
Telemark University College

Faculty of Technology
DAQ training course
Introduction to SCADA systems, OPC, Real-time systems and
DAQ systems.
© Nils-Olav Skeie (NOS)
January 5, 2011
Contents
Preface vii
I SCADA systems 1
1 Industrial IT systems 2
1.1 Control system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Process control system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Distributed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 System Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.3 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5.1 Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 The Functions of a Computer Control System . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 SCADA 9
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 User Interface (UI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.2 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 Alarm system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 SCADA Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 SCADA control and monitoring devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.1 RTU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 DCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.3 PLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.4 PAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 RTU Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Open or Closed Control Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.2 PID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.3 CNC Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.4 Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.5 Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Superior SCADA systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5.1 ERP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5.2 MES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.3 IMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.1 Safety system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6.2 Shutdown system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.8 Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.9 Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
II OPC 22
3 Introduction 23
3.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Software application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Communication model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4.1 Client/server model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4.2 Publisher/subscriber model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4 OPC specification 30
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3 OPC Common . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 OPC Data Access (OPC DA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5 OPC Alarms & Events (OPC AE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.6 OPC Data eXchange (OPC DX) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.7 OPC Historical Data Access (OPC HDA) . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.8 OPC Batch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.9 OPC Complex Data (OPC CD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.10 OPC Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.11 OPC XML-DA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.12 OPC Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.13 OPC UA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5 OPC system 46
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Why use OPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.3 OPC development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.4 OPC test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6 OPC exercise 48
III Real-Time System 55
7 Introduction 56
7.1 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.2 Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.3 Embedded system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.4 CPU and microcontrollers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8 Specifications 60
8.1 Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
9 System architecture 62
9.1 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
9.2 States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
9.3 Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10 Synchronization 67
10.1 Semaphore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.2 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10.3 Interprocess communication (IPC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.3.1 Pipes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.3.2 Message queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.3.3 Shared memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.4 Communication Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.4.1 Token Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
10.4.2 CSMA/CD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
11 Resources 72
11.1 Deadlock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
11.2 Critical region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
12 Software modules 76
12.1 Instruction time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
12.2 Software application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
12.2.1 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
12.2.2 Thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
12.2.3 Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
12.3 Core and Multicore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.4 Input monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.4.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.4.2 Priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.5 Watchdog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
13 Design 85
14 Programming 86
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
14.2 Memory allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
14.3 Posix.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
14.4 C# example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
15 Operating systems 90
15.1 RTOS requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
15.2 Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
15.3 Windows / Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
15.3.1 Windows history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
15.3.2 Windows CE or Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
15.3.3 Windows XP Embedded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
15.4 QNX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
15.5 VxWorks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
16 RT system 96
16.1 Benefits of any RTOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
16.2 Cost of RTOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
16.3 Contents of a RTOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
IV DAQ systems 97
17 Sensor overview 98
17.1 Sensor device types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
17.1.1 Passive or Active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
17.1.2 Absolute or Relative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
17.1.3 Point or Continuous Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
17.1.4 Contact or non-contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
17.1.5 Invasive or Intrusive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
17.2 Sensor device properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
17.2.1 Concepts for Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
17.2.2 Concepts for Operating Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
17.2.3 Concepts for Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
17.3 Sensor output signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
17.4 Dynamic measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
17.5 MEMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
18 Signal condition systems 108
18.1 Amplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
18.1.1 Bandwidth distortions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
18.1.2 Common-mode rejection ratio (CMRR) . . . . . . . . . . . . . . . . . . . . . . . . 111
18.1.3 Input and output loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
18.2 Attenuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
18.3 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
18.3.1 Low pass filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
18.3.2 High pass filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
18.3.3 FIR or IIR filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
18.4 Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.5 Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.6 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.7 Combiner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.8 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.8.1 Low-level analog voltage signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
18.8.2 High-level analog voltage signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
18.8.3 Current-loop analog signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
18.8.4 Digital signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
18.9 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
19 Data Acquisition Systems 124
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
19.2 Digital representation of numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
19.2.1 Integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
19.2.2 Floating numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
19.3 ASCII codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
19.4 DAQ parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
19.4.1 Counters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
19.4.2 Digital inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
19.4.3 Digital outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
19.4.4 Multiplexer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
19.4.5 Digital to Analog Converters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
19.4.6 Analog to Digital Converter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
19.4.7 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
19.4.8 Reference Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
19.4.9 Single-Ended and Differential Inputs . . . . . . . . . . . . . . . . . . . . . . . . 140
19.4.10 Number of channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
19.4.11 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
19.4.12 Range, Gain and Measured Precision . . . . . . . . . . . . . . . . . . . . . . . . 141
19.4.13 Software calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
19.4.14 Transfer of A/D conversion to system memory . . . . . . . . . . . . . . . . . . 141
19.5 Range check of signal values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
20 Communication 143
20.1 Communication architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.1.1 Current loop communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.1.2 Serial communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.1.3 Network communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.1.4 Instrument control buses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.1.5 Wireless communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.2 Wireless Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
20.2.1 Bar Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
20.2.2 RFID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
20.2.3 RFID or Bar Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
20.2.4 GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
20.3 Wireless sensor network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
20.3.1 ZigBee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
20.3.2 Bluetooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
20.3.3 Wireless HART . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
20.3.4 Wireless Cooperation Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
20.3.5 Comparison of wireless standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
20.4 Distributed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
21 Discrete Sampling 159
21.1 Sampling-rate theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
21.2 A/D conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
21.3 Simultaneous Sample and Hold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
21.4 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.5 Oversampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.6 Folding diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
21.7 Spectral analysis of Time varying signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
21.8 Spectral Analysis using the Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . 167
21.9 FFT diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
21.10 Selecting the sampling rate and filtering . . . . . . . . . . . . . . . . . . . . . . . . . 169
21.11 Dynamic range of the filter and A/D converter . . . . . . . . . . . . . . . . . . . . . 169
21.12 Time interleaved A/D converters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
21.13 Nyquist Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
22 Logging 171
22.1 Sensor data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
22.2 Historical data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
22.3 Trend curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23 Statistical analysis of Experimental data 173
23.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
23.2 General concepts and definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
23.2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
23.2.2 Measure of central tendency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
23.2.3 Measures of dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
23.3 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
23.3.1 Examples using the room temperatures . . . . . . . . . . . . . . . . . . . . . . . . 176
23.4 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
23.4.1 Probability Distribution Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
23.4.2 Some probability distribution functions with engineering applications . . . . . . . . 177
23.4.3 Parameter estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
23.4.4 Criterion for rejecting questionable data points . . . . . . . . . . . . . . . . . . . . 180
23.4.5 Correlation of experimental data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
23.5 Uncertainty budget . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
24 Calibration 185
24.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
24.2 Calibration process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
24.3 Calibration of sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
24.4 Calibration Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
V Documentation 190
25 Guidelines for planning experiments 191
25.1 Overview of an experimental task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
25.1.1 Problem definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
25.1.2 Experimental design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
25.1.3 Experimental construction and development . . . . . . . . . . . . . . . . . . . . . . 192
25.1.4 Data gathering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.1.5 Data analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.1.6 Interpreting the results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.1.7 Conclusion and reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.2 Activities in experimental projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.2.1 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.2.2 Cost Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
25.2.3 Dimensional analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
25.2.4 Determining the Test Rig Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
25.2.5 Uncertainty Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
25.2.6 Calibration/testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
25.2.7 Test Matrix and Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
25.2.8 Documenting Experimental Activities . . . . . . . . . . . . . . . . . . . . . . . . . 194
25.2.9 Group projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
26 Meetings 195
27 Guidelines for documenting experiments 197
27.1 Informal report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
27.2 Formal report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
27.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
27.3.1 Harvard style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
27.3.2 Vancouver style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
27.4 Article or paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Bibliography 199
Index 201
Preface
This document contains sections for SCADA systems, OPC protocol, real-time systems, DAQ systems,
and sensor signal conversions. The history of this document is:
Revision   Changes / Extensions                        Who   Date
0.1        First version for the DAQ workshop at TUC   NOS   5-JAN-11
© Nils-Olav Skeie: Permission is granted to distribute single copies of this document
for non-commercial use, as long as it is distributed as a whole in its original form, and the
name of the author is mentioned.
Part I
SCADA systems
Chapter 1
Industrial IT systems
The process industry uses more and more IT systems, and a PC is preferred due to its low price
and the software available. A process IT system can be a monitoring system, a control system, or any
combination of the two. The monitoring system is just for monitoring a process and contains only input modules
and some sort of I/O devices for the user information. The control system consists of a monitoring
part used for input of information and an output part for controlling the process, together with some
sort of I/O devices for the user information. These systems can be stand-alone systems consisting of
only a single computer, or distributed systems consisting of several, tens, or even hundreds of computers
interconnected in different ways.
1.1 Control system
A control system is a device or set of devices used for managing the behavior of another system or a
process. The control system will take the input from one or several sensing devices and perform some
sort of action on a set of actuators, depending on the input signals and the algorithm. Two different
types of control systems exist: logic control and feedback control.
A logic control system responds to simple input signals, often on/off signals. Normally
these systems perform a sequence of operations in response to an input signal; hence
they are also called sequential control systems.
A feedback control system, or linear control system, uses continuous feedback signals from the
system in the control algorithm. A PID controller is an example of such a controller, where the difference
between a set point signal and the feedback signal is used for control.
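As an illustration of this feedback principle, one step of a discrete PID controller can be sketched in a few lines. This is only a sketch; the gains and sampling time are arbitrary example values, not taken from the course material:

```python
def pid_step(setpoint, measurement, integral, prev_error,
             Kp=2.0, Ki=0.5, Kd=0.1, dt=0.1):
    """One step of a discrete PID controller.

    The controller acts on the difference between the set point and the
    feedback signal (the measurement). The caller carries `integral` and
    `prev_error` between calls.
    """
    error = setpoint - measurement           # set point minus feedback
    integral += error * dt                   # accumulated error (I term)
    derivative = (error - prev_error) / dt   # rate of change of error (D term)
    output = Kp * error + Ki * integral + Kd * derivative
    return output, integral, error

# With the measurement at the set point and no history, the output is zero;
# with the measurement below the set point, the controller pushes upwards.
u0, _, _ = pid_step(50.0, 50.0, 0.0, 0.0)
u1, _, _ = pid_step(50.0, 45.0, 0.0, 0.0)
```

In a real control loop the output would be written to an actuator once per sample period, and the gains would be tuned for the particular process.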
Also note that some physical systems are not controllable.
Fuzzy logic is a combination of logic control and feedback control to combine some of the design
simplicity of logic control with the utility of feedback control.
An automated system is a collection of devices working together to accomplish tasks or produce a
product or family of products. The main functions of a control system are shown in Figure 1.1, where a
control system is connected to a SCADA system.
1.2 Process control system
A process control system monitors and controls some sort of process. Processes can be described by
their starting and stopping points, and by the kinds of changes that take place in between. The types of
processes are shown in Figure 1.2 and can be:
discrete; Found in many manufacturing, motion and packaging applications. Robotic assembly, such
as that found in automotive production, can be characterized as discrete process control. Most
discrete manufacturing involves the production of discrete pieces of product (www.wikipedia.org
2006),
batch; Batch jobs can be stored up during working hours and then executed during the evening or
whenever the computer is idle. Batch processing is particularly useful for operations that require
the computer or a peripheral device for an extended period of time. Once a batch job begins, it
Figure 1.1: The main functions for a process control system; monitoring, control and interconnections.
Figure 1.2: The type of processes that can be monitored and/or controlled by a process control system.
Figure 1.3: A distributed computer system.
continues until it is done or until an error occurs. Note that batch processing implies that there is
no interaction with the user while the program is being executed,
continuous; Often, a physical system is represented through variables that are smooth and unin-
terrupted in time. This is a system that should run all the time, often named a 24/7
system, meaning running 24 hours a day, 7 days a week. The control of the water temperature
in a heating jacket is an example of continuous process control. Some important
continuous processes are the production of fuels, chemicals and plastics. Continuous processes, in
manufacturing, are used to produce very large quantities of product per year (millions to billions of
pounds) (www.wikipedia.org 2006),
hybrid; Applications that are some combination of discrete, batch and continuous process
control.
1.3 Distributed Systems
An industrial system can be a single computer system or a distributed system with several devices
interconnected. A single system is often used on very small processes or plants, but normally several
different IT systems cooperate to solve the monitoring and/or controlling tasks. The reasons for
using several systems or devices are:
1. to exploit the functions of different systems or devices,
2. redundancy,
3. better overview and structure,
4. easier troubleshooting.
The drawbacks of using several systems are the price and a more complex system. No perfect single
system exists for controlling purposes, so a cooperation between different systems is preferred. Such a
system is shown in Figure 1.3.
A distributed control system (DCS) refers to a control system, usually of a manufacturing system,
process or any kind of dynamic system, in which the controller elements are distributed throughout the
system, with each component sub-system controlled by one or more controllers. The entire system of con-
trollers is connected by some sort of network for communication and monitoring (www.wikipedia.org
2006).
A distributed system will have a more complex system structure than a single system, but each sub-
device will end up with a simpler structure than a single system. The distributed system can also
have a higher degree of redundancy.
One example of a distributed system is shown in Figure 1.4. The system contains several controllers
for measurement and control of different parts of the plant, local displays, a local area network for sharing
information, and several display systems for remote operations.
Figure 1.4: A distributed system using a set of local controllers, a local area network (LAN), and systems
for remote operations (Caro 2004).
1.4 System Reliability
1.4.1 Introduction
The main branches of reliability (Rausand & Høyland 2004):
1. hardware reliability,
(a) the physical approarch; technical items,
(b) the actuarial approach; operating loads and strength,
2. software reliability,
3. human reliability.
Some system reliability definitions (Rausand & Høyland 2004):
1. Reliability; the ability of an item to perform the required function, under given environmental and
operational conditions and for a stated period of time.
2. Quality; the totality of features and characteristics of a product or service that bear on its ability
to satisfy stated or implied needs.
3. Availability; the ability of an item (under combined aspects of its reliability, maintainability, and
maintenance support) to perform its required function at a stated instant of time or over a stated
period of time.
4. Maintainability; the ability of an item, under stated conditions of use, to be retained in, or restored
to, a state in which it can perform its required functions, when maintenance is performed under
stated conditions and using prescribed procedures and resources.
5. Safety; freedom from those conditions that can cause death, injury, occupational illness, or damage
to or loss of equipment or property.
6. Security; dependability with respect to prevention of deliberate hostile actions.
7. Dependability; the collective term used to describe the availability performance and its influencing factors: reliability performance, maintainability performance, and maintenance support performance.
Figure 1.5: The failure rates over time for a given system or subsystem. The shape is often known as the bathtub function.
1.4.2 Estimation
Applications for estimating system reliability (Rausand & Høyland 2004):
1. risk analysis,
2. environmental protection,
3. quality,
4. optimization of maintenance and operation,
5. engineering design,
6. verification of quality/reliability.
1.4.3 Computation
Computation of reliability (Olsson & Rosen 2003):
It is assumed that the possible errors are independent events, i.e. that they do not depend on each other. This assumption is correct as long as a faulty component does not influence the other components or have a causal effect on their functionality (Olsson & Rosen 2003).
Using n components in the system, these components will either operate or be faulty:

n = n_o(t) + n_f(t)

where n_o(t) is the number of operating components and n_f(t) is the number of faulty components; both numbers are functions of time, while n is constant. The reliability function R(t) is defined as follows (Olsson & Rosen 2003):

R(t) = n_o(t) / n = 1 − n_f(t) / n

A measure of the system is the MTTF (Mean Time To Failure), given as:

MTTF = ∫_0^∞ R(t) dt = 1/λ

where λ is the fault rate.
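As a small illustration (not from the course material), the reliability function and the MTTF can be sketched for the simplest case of a constant fault rate λ, i.e. the flat middle section of the bathtub curve, where R(t) = exp(−λt) and MTTF = 1/λ:

```python
import math

# Sketch assuming a constant fault rate lambda (exponential model):
# R(t) = exp(-lambda * t) and MTTF = 1/lambda.
def reliability(t, fault_rate):
    """R(t): fraction of components expected to still operate at time t."""
    return math.exp(-fault_rate * t)

def mttf(fault_rate):
    """MTTF = integral of R(t) from 0 to infinity = 1/lambda."""
    return 1.0 / fault_rate

lam = 0.001                    # assumed example value: 0.001 failures per hour
print(mttf(lam))               # 1000.0 hours
print(reliability(1000, lam))  # about 0.368, i.e. 1/e at t = MTTF
```

Note that at t = MTTF only about 37% of the components are expected to still be operating; the MTTF is a mean, not a guaranteed lifetime.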
The fault rate for hardware devices often gives a shape known as the bathtub function, with several
early faults in the beginning, a section with random faults (constant fault rate), and ends with wear-out
faults. This shape is shown in Figure 1.5.
Figure 1.6: The mean time between failures for a device or a system (www.wikipedia.org 2006).

The availability of the system is measured via the average length of the time intervals in which the system operates correctly, called the MTBF (Mean Time Between Failures). The average length of the time intervals in which the system is not working is called the MTTR (Mean Time To Repair). The availability A of a system is defined as (Olsson & Rosen 2003):
A = MTBF / (MTBF + MTTR)
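The availability formula can be sketched directly; the MTBF and MTTR figures below are made-up example numbers, not from the course material:

```python
def availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR), both in the same time unit."""
    return mtbf / (mtbf + mttr)

# Example: a device failing on average every 2000 h, repaired in 4 h:
print(availability(2000, 4))  # about 0.998, i.e. down roughly 0.2% of the time
```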
1.5 Redundancy
Redundancy is the duplication of critical components of a system with the intention of increasing relia-
bility of the system, usually in the case of a backup or fail-safe (www.wikipedia.org 2006).
There can be three types of redundancy (Fortuna, Graziani, Rizzo & Xibilia 2007):
1. Physical redundancy; physically replicating the components to be used,
2. Analytic redundancy; the redundant source is a mathematical model of the component,
3. Knowledge redundancy; the redundant source consists of heuristic information about the system.
Another way of dividing the forms of redundancy is (www.wikipedia.org 2006):
1. hardware;
(a) dual modular redundant (DMR) has duplicated elements which work in parallel to provide
one form of redundancy,
(b) triple modular redundancy (TMR) is a fault tolerant form of N-modular redundancy, in which three systems perform a process and the result is processed by a voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault. If the voter fails then the complete system will fail. However, in a good TMR system the voter is much more reliable than the other TMR components.
2. information;
(a) error detection and correction,
(b) soft sensor,
(c) disk arrays,
3. time;
4. software.
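The 2-out-of-3 voting described for TMR can be sketched as follows (a minimal illustration; the signal values are invented):

```python
def tmr_vote(a, b, c):
    """2-out-of-3 majority vote for boolean channels: masks one faulty input."""
    return (a and b) or (a and c) or (b and c)

def tmr_median(a, b, c):
    """For analog channels the median value masks one faulty sensor reading."""
    return sorted((a, b, c))[1]

print(tmr_vote(True, True, False))   # True: the single faulty channel is masked
print(tmr_median(4.01, 4.02, 19.9))  # 4.02: the outlier reading is ignored
```

The voter itself is a single point of failure, which is why the text notes that it must be much more reliable than the three replicated channels.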
An important property when deciding any form of redundancy is the Mean Time Between Failures
(MTBF). MTBF is the mean time between failures of a system, see Figure 1.6. The calculation of the
MTBF will be:
MTBF = Σ (start of downtime − start of uptime) / number of failures

Calculations of MTBF assume that a system is "renewed", i.e. fixed, after each failure, and then returned to service immediately after failure. The downtime and uptime values are as shown in Figure 1.6, meaning that the MTBF adds up the periods the system is working and divides by the number of failures. The average time between failing and being returned to service is termed mean down time (MDT) or mean time to repair (MTTR) (www.wikipedia.org 2006).
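The MTBF and MTTR calculations can be sketched from a hypothetical event log of (uptime start, failure time) pairs; the numbers below are invented for illustration:

```python
# Hypothetical event log: (uptime start, failure time) pairs in hours.
periods = [(0, 500), (510, 1200), (1230, 1800)]

def mtbf(periods):
    """Sum of operating (uptime) periods divided by the number of failures."""
    return sum(down - up for up, down in periods) / len(periods)

def mttr(periods):
    """Average time from a failure until the system is back in service."""
    repairs = [periods[i + 1][0] - periods[i][1] for i in range(len(periods) - 1)]
    return sum(repairs) / len(repairs)

print(mtbf(periods))  # (500 + 690 + 570) / 3 = 586.66... hours
print(mttr(periods))  # (10 + 30) / 2 = 20.0 hours
```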
1.5.1 Communication
Several nodes (computers with communication capabilities) are often connected to the same communication medium. The communication medium, called a bus, can be wire, optic fibre, or wireless. A single failure of the wire or fibre can stop all communication in the system. Several types of redundancy exist:
ring redundancy; both ends of the communication medium are connected to the master,
sub ring redundancy; only parts of the network have redundancy,
master redundancy; several masters monitoring the traffic of the network.
1.6 Cluster
A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer (www.wikipedia.org 2010). These computers are normally interconnected through a fast local area network. Clusters of computers are usually deployed to improve performance and/or availability over that of a single computer. A cluster of computers is typically much more cost-effective than a single computer of comparable speed or availability.
Redundancy is duplication of computers, while a cluster is computers working together.
1.7 The Functions of a Computer Control System
The main functions of a process control system are shown in Figure 1.1. These functions are:
1. process monitoring; modules for collecting and interpreting data from the plant,
2. control; modules for controlling some parameters of the plant,
3. connections; interconnections between the process monitoring and control for processing of input and output data, for feedback and automatic control.
Chapter 2
SCADA
2.1 Introduction
Supervisory Control And Data Acquisition (SCADA) is an industrial control system monitoring and controlling a process, or separate control systems. A SCADA system itself is only a software application. A SCADA system usually consists of the following subsystems (www.wikipedia.org 2006):
1. a Human-Machine Interface (HMI), the apparatus which presents process data to a human operator, and through which the human operator monitors and controls the process,
2. a supervisory (computer) system, gathering (acquiring) data from the process and sending commands (control) to the process,
3. Remote Terminal Units (RTUs) connecting to sensors in the process, converting sensor signals to digital data and sending the digital data to the supervisory system,
4. communication infrastructure connecting the supervisory system to the Remote Terminal Units,
5. an integrated alarm system.
A SCADA system consists of a number of software modules, at least a monitoring module to get information from the process and a control module to write information back to the process. Figure 2.1 shows the most common software modules (or sub modules) in a SCADA system.
Some of the important SCADA subsystems (or submodules) can be:
2.1.1 User Interface (UI)
A User Interface is the device or module which presents process data to a human operator, and through which the human operator controls the process. The User Interface (UI) is also known as the:
1. Graphical User Interface (GUI); today almost all user interfaces are graphical,
2. Man-Machine Interface (MMI), the same as a UI,
3. Human-Machine Interface (HMI), the same as a UI.
A UI can range from simple devices like buttons and lamps up to complex systems on several computer screens or an overhead screen. The presentation of the process information is very important to let the operator focus on the important information only. Separation into information levels is a common solution for control systems. The system has an overview presentation, and the operator can select a specific part to get more detailed information. Depending on the size of the plant and the type of information, there can be several layers of information. An example of such a system is shown in Figure 2.2. Figure 2.3 shows two computer screens with a graphical UI with process information and control options for a plant. Figure 2.4 shows an overview GUI of a process or plant, and a set of GUIs with details from several parts of the same process or plant.
The amount of information on the screen has to be adapted to the usage of the system, and it is important not to overload the GUI with information. Figure 2.5 shows two different modes of presentation of the same information. Which presentation will be best for the operator?
CHAPTER 2. SCADA 10
Figure 2.1: An overview of some of the software modules that can be part of a SCADA system.
2.1.2 Database
A database is a structured collection of data stored in a computer system. The structure is achieved by organizing the data according to a database model (www.wikipedia.org 2006). The model used most today is the relational model, but other models exist, such as the hierarchical model or the network model.
In a process system the database is used for:
1. configuration or setup data; how the process system is structured, with references to the input and output values,
2. runtime data; the current values of the process system,
3. historical data; a history of the current values of the process system.
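The three data categories above can be sketched with a minimal relational schema. This is an illustration only (the tag name, columns, and SQLite are invented for the example; real SCADA systems typically use a dedicated relational database and historian):

```python
import sqlite3

# Illustrative schema for configuration, runtime, and historical data.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE config  (tag TEXT PRIMARY KEY, io_address TEXT, unit TEXT);
    CREATE TABLE runtime (tag TEXT PRIMARY KEY, value REAL, quality TEXT);
    CREATE TABLE history (tag TEXT, value REAL, ts TEXT);
""")
db.execute("INSERT INTO config  VALUES ('TT101', 'AI-03', 'degC')")
db.execute("INSERT INTO runtime VALUES ('TT101', 87.4, 'GOOD')")
db.execute("INSERT INTO history VALUES ('TT101', 87.4, '2011-01-05 12:00:00')")
print(db.execute("SELECT value FROM runtime WHERE tag = 'TT101'").fetchone()[0])
```

The configuration table maps a tag to its I/O address, the runtime table holds the current value (with a quality flag), and the history table accumulates timestamped samples of the runtime values.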
2.1.3 Alarm system
A process system needs an alarm system for presenting the alarms and the actions taken by the operators. The alarm system should be an integrated part of the monitoring and control systems to give an overview of all the alarms.
2.2 SCADA Overview
Figure 2.6 shows how these devices, functions, and/or modules can be interconnected in an industrial IT system. An industrial IT system can however combine the devices and/or modules in any combination, and this course will try to give a background for understanding these devices and the different ways of interconnecting them. A SCADA system is a software module running on an industrial computer, often with an alarm system, a database, and a UI. The SCADA system will communicate with a set of external devices, denoted RTUs, being a DCS, a PLC, or a PAC depending on the control purposes. The RTUs are physical hardware devices, working as distributed modules in the process system. The PID module can be a software module in these RTUs. These RTUs will be standalone units, but the SCADA system can interact with them to monitor the operation or change the control parameters.
The layered interconnection between the ERP system, the MES, the IMS, and the SCADA system is shown in Figure 2.7. This figure shows the layer priorities of the systems and the information used in these systems. The ERP, the MES, and the IMS all depend on the SCADA system; the information used and the decisions taken in the ERP, the MES, and the IMS depend on trustworthy information from the SCADA system. The management systems will be useless if the SCADA system is not able to deliver trustworthy information.
Figure 2.2: Several layers of information in a process system, from overview information about the plant
down to detailed information about a sensor device (Olsson & Rosen 2003).
Figure 2.3: A computer screen based UI for a plant (www.analogdevices.com: SEP-08).
Figure 2.4: The GUI of a SCADA system showing an overview of a total process (or plant) (www.abb.com:
dec-09).
Figure 2.5: The same type of information presented in two dierent systems. Which system will you
prefer to use?
Figure 2.6: An overview of a set of devices and interconnections that can be used in an industrial IT
system.
Figure 2.7: The layered connections between an ERP system, MES, IMS, and the SCADA system (from
John Baaserud, Baze Technology).
Figure 2.8: A gas power plant at Kårstø in Rogaland in the western part of Norway (Photo: Dag Magne Syland/StatoilHydro).
An example of a plant using SCADA systems is shown in Figure 2.8. This is a gas power plant at Kårstø in Rogaland, in the western part of Norway, consisting of several SCADA systems monitoring and controlling the functions of the plant.
Some of the SCADA systems available are Wonderware, Citect, and iFix.
2.3 SCADA control and monitoring devices
A SCADA system normally uses a set of sub devices for monitoring and controlling a plant. The size of the plant will decide the distribution of devices, but normally the SCADA system should be independent of the physical monitoring and control of the plant. These devices can be RTUs, DCSs, PLCs, and PACs, and any combination of these devices can be interconnected to function as such a device for the SCADA system.
2.3.1 RTU
Remote Terminal Units (RTUs) are distributed computer systems in a larger control system, where each RTU will control and/or monitor only a part of the plant. Often an RTU is used for each closed control loop to better maintain the control strategies of the plant. The RTU will be an industrial computer system consisting of (see Figure 2.9):
inputs; for analog and/or digital inputs,
outputs; for analog and/or digital outputs,
Figure 2.9: The modules of the RTU, with I/O for reading and writing values to process equipment like
sensors and actuators and communication to some sort of SCADA system.
Figure 2.10: A set of DCS are used for monitoring and controlling a plant. Each DCS can consists of a
single computer or a set of computers in network.
communication; for communication with the SCADA system (serial lines or network, wired or
wireless),
processor and memory,
software dedicated to the functions of the RTU.
The RTU will often use a real-time operating system and software customized to the operation of the RTU. The RTU is a standalone physical unit working without any interaction from remote systems; however, the remote system (SCADA) can monitor the device and also change any control parameters of the RTU. Available RTU types are the DCS, the PLC, and the PAC.
2.3.2 DCS
A Distributed Computer System (DCS) is an RTU used mainly for analog I/O to the control system. The DCS often has a modular, distributed, but integrated architecture (Mackay, Wright, Park & Reynders 2004). The DCS is dedicated to a specific task in the control system, often some analog monitoring and/or control loop. These devices may also have a small display for a local UI (Mackay et al. 2004).
The DCS is used for control purposes in a distributed system; an industrial process will consist of a set of DCSs, each controlling only a part of the process. The DCS itself can be a single computer or a set of networked computers, depending on the complexity and/or redundancy requirements of the control system. To complicate the structure, a PLC can also be part of the DCS network.
An example of a plant controlled by a number of DCSs is shown in Figure 2.10. The DCSs are connected in a network. Each DCS can consist of a single computer or several computers interconnected in a private network or the common network. The network of computers for a DCS can be any combination of computers for measurement and control, both DCS and PLC.
Figure 2.11: The operation of a PLC program. An image of the input bits is copied to memory, the program performs its operations on the input image in memory, generating an output image. At the end of the cycle time the output pins are updated.
2.3.3 PLC
A Programmable Logic Controller (PLC) is an RTU used mainly for digital I/O for the control system. These devices are primarily used for sequence control based on on/off inputs and outputs, but today these devices may also include analog I/O (Mackay et al. 2004). The PLC is often a single computer. The PLC will have a cycle time: at a specific time a copy of the input state is copied to memory, the PLC program runs and, depending on the input states and the program, an output state is generated in memory. At the end of the cycle time the output state image is copied to the output pins. This way the input state is steady while the program is running, but there will always be a delay from the input states to the output states in a PLC system. This is shown in Figure 2.11.
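The scan cycle above can be sketched as follows. This is only an illustration: the start/stop latch logic and the signal names are invented, and real PLCs are programmed in IEC 61131-3 languages (ladder, function blocks, structured text), not Python:

```python
# Sketch of one PLC scan cycle: snapshot inputs, run the program on the
# steady image, then update all outputs at once at the end of the cycle.
def run_program(image, state):
    # Motor latch: runs after 'start' is pressed until 'stop' is pressed.
    state["motor"] = (image["start"] or state.get("motor", False)) and not image["stop"]
    return {"motor": state["motor"]}

def scan_cycle(inputs, state):
    image = dict(inputs)                 # 1. copy input pins to a memory image
    outputs = run_program(image, state)  # 2. execute the program on the image
    return outputs                       # 3. copy the output image to the pins

state = {}
print(scan_cycle({"start": True, "stop": False}, state))   # {'motor': True}
print(scan_cycle({"start": False, "stop": False}, state))  # {'motor': True} (latched)
print(scan_cycle({"start": False, "stop": True}, state))   # {'motor': False}
```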
2.3.4 PAC
A Programmable Automation Controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with those of a typical PLC. A PAC provides the reliability of a PLC and the task flexibility and computing power of a PC.
2.4 RTU Subsystems
2.4.1 Open or Closed Control Loops
Control systems can operate with or without a monitoring part: open control loops have no feedback from the process, while closed control loops use feedback (monitoring) from the process.
2.4.2 PID
A proportional-integral-derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly (www.wikipedia.org 2006).
The PID controller calculation (algorithm) involves three separate parameters: the Proportional, the Integral, and the Derivative values. The Proportional value determines the reaction to the current error, the Integral value determines the reaction based on the sum of recent errors, and the Derivative value determines the reaction based on the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process as shown in Figure 2.12, where K_p is the proportional factor, K_i is the integral factor, and K_d is the derivative factor.
Figure 2.12: An overview of the PID controller (www.wikipedia.org 2006).

The PID function or controller controls the process using some type of analog actuator(s). The PID function is often part of the functionality of the RTU in the SCADA system, implemented as a software module only. Some applications may require using only one or two modes to provide the appropriate system control, giving a PI, PD, or just a P controller. The PID function is often a software module of the DCS, the PLC, and the PAC devices.
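The weighted sum of the three terms can be sketched in discrete, textbook form; the gains and sample time below are arbitrary illustration values, not from the course material:

```python
# Minimal discrete PID sketch: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # sum of recent errors
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
print(pid.update(setpoint=50.0, measurement=48.0))  # about 6.1 (= 4.0 + 0.1 + 2.0)
```

Setting K_i = K_d = 0 reduces this to a pure P controller, and similarly for the PI and PD variants mentioned above. A production implementation would also need anti-windup on the integral term and output limiting.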
2.4.3 CNC Machines
Computer Numerical Control (CNC) machines use programmed operations for machine tools. These machine tools, powered mechanical devices, are operated only by the software program downloaded into the device.
2.4.4 Robots
Robots are flexible tools in automated systems.
2.4.5 Instrumentation
Instrumentation consists of both sensors and actuators: sensors for monitoring the process and actuators for controlling the process. Figure 2.13 shows a part of a plant with pipes, sensors, and actuators. The actuators are mainly valves, the orange devices. Some of the sensors are used for feedback information about the valve positions, the small green devices on top of the valves.
2.5 Superior SCADA systems
2.5.1 ERP
Enterprise resource planning (ERP) is an enterprise-wide information system designed to coordinate all the resources, information, and activities needed to complete business processes such as order fulfillment or billing (www.wikipedia.org 2006).
An ERP system supports most of the business functions, maintaining in a single database the data needed for a variety of business areas such as Manufacturing, Supply Chain Management, Financials, Projects, Human Resources, and Customer Relationship Management. The following are steps of a data migration strategy that can help with the success of an ERP implementation (www.wikipedia.org 2006):
1. Identifying the data to be migrated,
2. Determining the timing of data migration,
3. Generating the data templates,
4. Freezing the tools for data migration,
5. Deciding on migration related setups,
6. Deciding on data archiving.
Figure 2.13: A part of a plant with several pipes, valves as actuators, and sensors reading the valve
positions (Photo from Automatisering 07/2009).
2.5.2 MES
Manufacturing execution systems (MES) serve as the intermediary between a business system, such as ERP, and a manufacturer's plant floor control equipment. MES helps manage production scheduling and sequencing, creating an audit trail for track and trace, and delivering work instructions to shop floor workers. MES is also known as Operations Management Software (OMS).
The key focus of an MES system is traceability, being able to figure out:
1. where a product was manufactured,
2. when a product was manufactured,
3. any sub devices used to manufacture this product,
4. any claims, warnings, or errors for this product.
2.5.3 IMS
An Information Management System (IMS) is an information system that makes low level information available on all levels in the organization. This system is often divided into the subsystems:
Laboratory Information Management System (LIMS); information system for lab data: samples, analysis, results, instrument management, etc.,
Process Information Management System (PIMS); information system with focus on process data and data acquisition to a real-time database. In the process industry the PIMS and the IMS are the same system.
2.6 Security
A SCADA system must also contain control logic for safety and shutdown functions. Such a system can also be named a SAS (Safety and Automation System).
2.6.1 Safety system
Safety Integrity Level (SIL) defines requirements for the processing chain: reading, evaluating, and responding. For a process system or a SCADA system, the SIL is evaluated over the sensor devices, the processing units (RTUs), and the actuators. The SIL consists of a set of numbers with 1 as the lowest level. The weakest part in a chain will decide the SIL for the whole chain. One way to achieve a higher SIL is to use failsafe controllers. These controllers will evaluate safety-relevant field signals and switch to or stay in a safe condition in the event of faults (PROFIsafe 2009). In a failsafe controller, safety-oriented operations are processed in two different paths (algorithms) and the results are compared at the end of the algorithms. If there is any deviation, a fault has occurred in one of the paths, and the controller will switch to a safe condition. These controllers must have extensive self-diagnostic facilities (PROFIsafe 2009).
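The two-path evaluation can be sketched as follows. The trip limit, signal values, and function names are invented for illustration; a real failsafe controller implements this in redundant hardware with diverse firmware, not application code:

```python
TRIP_LIMIT = 10.0  # assumed example: trip above 10 bar

def path_a(pressure):
    return pressure > TRIP_LIMIT

def path_b(pressure):
    return not (pressure <= TRIP_LIMIT)  # same rule, diverse formulation

def failsafe_trip(pressure):
    a, b = path_a(pressure), path_b(pressure)
    if a != b:       # the paths disagree: a fault has occurred in one of them
        return True  # switch to the safe (tripped) condition
    return a

print(failsafe_trip(12.0))  # True  (trip)
print(failsafe_trip(8.0))   # False (normal operation)
```

The key design choice is that disagreement between the paths is itself treated as a fault, so the system fails toward the safe state rather than guessing which path is correct.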
The higher the degree of automation, the more the control needs to be monitored for safety, but this is only possible if the system is failsafe (PROFIsafe 2009). Safety concerns the hardware, software, and communication modules and devices in the system.
There exist many different standards and regulations regarding safety for equipment, humans, and the environment. Some are international, others local to different countries, or local extensions of or limitations to international standards and regulations.
IEC61508 Safety Standard
IEC61508 is an international standard focusing on safety-related systems that incorporate electrical,
electronic and/or programmable electronic (E/E/PE) instruments and devices. This standard is mainly
used in the automation and process control industry, but is more and more accepted for applications in
other industries.
The IEC 61508 standard is divided into 7 parts:
1. General requirements,
2. Requirements for (E/E/PE) safety-related systems,
3. Software requirements,
4. Definitions and abbreviations,
5. Examples of methods for the determination of safety integrity levels,
6. Guidelines on the application of IEC 61508-2 and IEC 61508-3,
7. Overview of measures and techniques.
2.6.2 Shutdown system
The function of a shutdown system is to protect the environment, plant, and humans in case any state of the process goes beyond predefined boundaries. A plant normally has several levels of protection, like:
Process Control System (PCS) for daily operation of the plant, using the alarm system of the SCADA system,
Process Shutdown System (PSD) for controlled process shutdown,
Emergency Shutdown System (ESD) in case of an emergency situation,
Fire & Gas System (FGS) to detect fire and initiate automatic shutdown,
Mechanical devices like Pressure Safety Valves (PSV) to avoid overpressure when the previous systems fail.
Figure 2.14: The development system for a SCADA system consists of a set of standard modules, and
can be extended with developed modules for a specic plant, process, or production system.
2.7 Documentation
A SCADA system can be documented in different ways. Some of the diagrams are:
1. P&ID (Process and Instrumentation Diagram); a family of functional one-line diagrams showing hull, mechanical, and electrical systems like piping, instrumentation, and cable block diagrams,
2. P&ID (Piping and Instrumentation Diagram/Drawing); a schematic diagram showing piping, equipment, and instrumentation connections within process units (www.wikipedia.org 2010),
3. SCD (System Control Diagram); mainly for SAS, integrating the control logic in order to more easily check the logic connections to other areas in the system.
The documentation uses different levels and symbols to show the contents of the control and monitoring systems.
2.8 Development
A SCADA system has to be developed and configured for a specific plant, process, or production system. A modern SCADA system, like System Platform from Wonderware (2010), Proficy iFIX (Intellution) from General Electric (GE) (2010), or CitectSCADA or ClearSCADA from Schneider Electric (2010), consists of a set of standard modules and building blocks. The building blocks are normally object oriented, meaning that a new system consists of the standard modules and any combination of the building blocks. New or extended building blocks can normally be developed easily using object oriented development principles. A developed building block consists of system functions (business layer) and specific HMI elements. Figure 2.14 shows a SCADA system with the standard modules and the developed modules; often the developed modules are mainly for the business layer, but also for communication with special hardware devices and specific HMI elements.
Since SCADA systems can be of any size and combination of devices, the SCADA software must be configured for each process system.
2.9 Future
Analysis from Frost & Sullivan¹ has shown that the SCADA market in Europe was worth about $1325 million in 2009. The same market is estimated to reach about $1900 million in 2016. The analysis covered the areas of oil and gas, power, water and wastewater, and others, including plant-level SCADA (food and beverage, pharmaceuticals, chemicals, and pulp and paper) and automotive and transportation. The reason for the increasing popularity is the standardization of systems and building blocks used in the systems, which provides operational effectiveness for a relatively low capital investment.

¹ According to Control Engineering Europe (www.controlengeurope.com), 5-OCT-2010. See also www.frost.com.
One of the big challenges confronting SCADA is that of cyber security. Better education of plant-level operators and engineers, as well as system integrators and other SCADA developers, about the benefits and importance of providing security is also necessary to ensure system security.
Part II
OPC
Chapter 3
Introduction
3.1 Background
In every area of the industry there is a move from proprietary solutions to open vendor independent
standards. As well as reducing costs it allows the choice of components according to their performance
and reduces the dependence on suppliers. The important part in data communication is the protocol
defining a set of rules for data exchange between software applications. Figure 3.1 shows two computers exchanging information using a wired communication link. The protocol defines how the computers exchange this information and specifies the data messages sent between the computers. A field bus defines how several computers can be connected to the same network in an industrial system. The bus protocol must define the structure of the data messages sent between the computers, but also how to use the bus as a common communication link: only one computer can transmit data on the bus at a time. Figure 3.2 shows several computers connected on a bus, using a bus protocol for controlling the communication between the computers. Very often the protocol defines the structure of the communication between the computers and, to some degree, the contents of the data messages. However, the details about specific data values and the meaning of specific bits are not clearly defined in the protocols. How can the software applications in a network system exchange a specific data value and know the status of this data value? This is one of the reasons for the OPC standard.
The specification of the Dynamic Data Exchange (DDE) protocol from 1987 provided a first solution for data exchange between MS-Windows based applications. The main drawback of this solution was low bandwidth, not very well suited for real-time systems. High bandwidth will be a major requirement for automation systems where exchange of data is important. See Figure 3.3.
The specification of the Object Linking and Embedding (OLE) protocol from 1990, a distributed object system and protocol, provided better bandwidth. OLE is said to be the evolution of DDE. While DDE was limited to transferring limited amounts of data between two running applications, OLE was capable of maintaining active links between two documents or even embedding one type of document within another. The main benefit of using OLE, next to reduced document size, is the ability to create a master document. References to data in this document can be made, and changed data in the master document will then take effect in the referenced documents. See Figure 3.3.
The OLE protocol later evolved to become an architecture for software components known as the Component Object Model (COM), given that the documents can be objects as well. Both OLE and COM were developed for communication on a single computer, but the Network OLE protocol later evolved into the Distributed Component Object Model (DCOM) protocol, allowing software components distributed across several networked computers to communicate with each other.
DCOM and OLE were used to develop the open standard OLE for Process Control
Figure 3.1: A protocol is needed when 2 computers are going to exchange information.
CHAPTER 3. INTRODUCTION 24
Figure 3.2: A protocol is needed when several computers are going to exchange information. The
protocols must dene the structure of the data messages sent between the computers, but also how the
computers should cooperate to use the bus as a common communication link.
Figure 3.3: The background for the OPC standard.
(OPC), as the original name for an open standard specication developed in 1996 by an industrial au-
tomation industry task force. The standard species the communication of real-time plant data between
control devices from dierent manufacturers. The background for the OPC standard is shown in Figure
3.3.
After the initial release, the OPC Foundation (http://www.opcfoundation.org) was created to maintain the standard. Today the OPC
standard is a series of standard specifications, and the standard is often called Open Process Control
(OPC).
3.2 Operating systems
The OPC Specification was based on the OLE, COM, and DCOM technologies developed by Microsoft
for the Microsoft Windows operating system family. The specification defined a standard set of objects,
interfaces and methods for use in process control and manufacturing automation applications to facilitate
interoperability.
OPC was designed to bridge Windows based applications and process control hardware and software
applications. It is an open standard that permits a consistent method of accessing field data from plant
floor devices. This method remains the same regardless of the type and source of data.
COM and DCOM were proprietary protocols from Microsoft, meaning that OPC could at first only be
used on systems based on the Windows operating system. Several other operating systems now
support the DCOM protocol, such as Solaris (Sun), Unix, VMS (Digital), Linux and AIX (IBM), so
that OPC can be used on different operating systems.
Figure 3.4: A 3 layer software application.
3.3 Software application
The term SCADA refers to a large-scale, distributed measurement (and control) system, see Figure 3.9.
A SCADA system includes input/output signal hardware, controllers, HMI, networks, communication,
database and software (www.wikipedia.org 2006).
Programmable automation controller (PAC) is a compact controller that combines the features and
capabilities of a PC-based control system with that of a typical programmable logic controller (PLC), see
Figure 3.8. PACs are most often used in industrial settings for process control, data acquisition, remote
equipment monitoring, machine vision, and motion control. Additionally, because they function and
communicate over popular network interface protocols like TCP/IP, OLE for process control (OPC) and
SMTP, PACs are able to transfer data from the machines they control to other machines and components
in a networked control system or to application software and databases (www.wikipedia.org 2006).
The SCADA and PAC software often consists of a 3 layer application model with the following layers
(see Figure 3.4) :
1. GUI; the presentation layer,
2. Business layer; the calculation, monitoring logic, analysis and so on,
3. Data layer; the process data, events, alarms and so on.
The SCADA and PAC software is a distributed measurement system getting the information from
different distributed computer equipment (DCE), so a 3 layered software application on a SCADA or
PAC system can be as shown in Figure 3.5.
A complex software system and its exchange of data will depend on a lot of different protocols. What
if a new function has to be integrated into the system? To solve the problems regarding the exchange
of data and the connections between all the modules, one of the solutions can be OPC. One solution for the
system in Figure 3.5 is shown in Figure 3.6.
As can be seen in Figure 3.6, the system has a much simpler structure, and any extensions or changes are
easy, as the information will be available through the OPC protocol. The drawback is that every system must
have support for the OPC protocol. A more specific figure is shown in Figure 3.7, showing 2 applications
as OPC clients using information from 3 OPC servers.
The OPC server will be the software module with some sort of data access and will have one or
several protocols for interfacing the I/O hardware modules. The server will have the OPC protocol
towards the other software modules in the system. All OPC clients must also have the OPC protocol
for communicating with the OPC servers. The complexity of the software will be much higher for the server
than for the clients.
Figure 3.5: A general SCADA or PAC software application with a lot of different protocols between the
software modules (Krogh 2005).
Figure 3.6: SCADA or PAC software using OPC as protocol for interconnection (Krogh 2005).
Figure 3.7: Applications in a system with OPC clients and OPC servers.
Figure 3.8: A PAC system as a combination of a PC and a PLC system.
Figure 3.9: A SCADA system with distributed I/O as HMI, PLC, DCE, and PAC.
3.4 Communication model
A SCADA (Supervisory Control And Data Acquisition) or PAC (Programmable automation controller)
system will always be a distributed system and from a communication point of view a distributed system
consists of a service provider and a service user. A PAC system, as a combination of a PC and a PLC
(Programmable Logic Controller), is shown in Figure 3.8. A SCADA system, with distributed I/O as a
HMI (Human Machine Interface), PLC, DCE (Distributed Computer Equipments), and PAC is shown in
Figure 3.9. As shown, a PAC can be part of a SCADA system, indicating that a SCADA system normally
will be a more complex system than a PAC system.
In these systems there will be a lot of information; some subsystems have the information and other
subsystems need this information. The service provider and service user must be logically connected
for the user to get information from the provider. The service user will ask the service provider for
information. See Figure 3.10.
This logical connection can be described using two different models:
1. Client/server model,
2. Publisher/subscriber model.
OPC supports both models and distinguishes only between synchronous and asynchronous services.
With asynchronous services another request can be answered before this one; there is no fixed relationship
between the requests from the clients and the answers back to the clients.
3.4.1 Client/server model
In a client/server model the server is the owner of the data (or resource), and the client, or clients, must
poll the server to exchange data. The advantage of a client/server model is that several clients can access
the data (or resource) at the server at the same time; without a server, only one client could access the
data (or resource) at a time. Remember that you should not have copies of your data in the process
system; the data should be available in only one location, and the client/server model is a good way of
dealing with the data resources.
Figure 3.10: User and provider messages.
Figure 3.11: Client/server model with a request and a response.
Figure 3.12: Several clients connected to a server.
The communication between the client and the server is determined by the OPC protocol. The
sequence always starts with a client sending a service request to the server. The answer from the server
will be a service response. Figure 3.11 shows the request and response in a client/server model.
Normally several clients will connect to a server, as shown in Figure 3.12. One problem with the
client/server model is the polling (requests) from the client; the client should request a new set of data
every time the data set is used. In a real-time system this can mean a lot of unnecessary requests, and a better
approach can be the publisher/subscriber model.
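The polling behaviour of the client/server model can be sketched in a few lines of Python. This is a conceptual sketch only; the class names, the tag name and the read() method are invented for illustration and are not part of any OPC API:

```python
class Server:
    """Owns the data (or resource); clients must ask for it."""
    def __init__(self):
        self.tags = {"TT-101": 21.5}   # hypothetical tag name

    def read(self, tag):
        # Every read is a full request/response round trip.
        return self.tags[tag]

class PollingClient:
    def __init__(self, server, tag):
        self.server = server
        self.tag = tag

    def poll(self, cycles):
        # The client must request the value every cycle, even if it has
        # not changed -- this is the traffic overhead of pure polling.
        return [self.server.read(self.tag) for _ in range(cycles)]

server = Server()
client = PollingClient(server, "TT-101")
print(client.poll(5))   # 5 requests for a value that never changed
```

Note how the number of requests grows with the polling rate regardless of whether the data changes; this is exactly the overhead the publisher/subscriber model avoids.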
3.4.2 Publisher/subscriber model
The publisher/subscriber model assumes a cyclic data supply by the publisher, where the data transfer
from the publisher depends on either an external request or an internal event (e.g. a timer). The client
or the clients must first subscribe to data and define the type of subscription (request and/or events),
and the server will give a response when the type of subscription is activated, without the need for polling
from the client. One or more clients can subscribe to the same type of data, as shown in Figure 3.13.
Will the publisher be a client or a server? The server is the owner of the data, and the most logical
solution is to let the server also be the publisher.
The differences between these communication models are shown in Figure 3.14, showing the messages in
the time domain. Notice the number of messages: the client/server model needs more network traffic to
get the same amount of information from the server.
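The publisher/subscriber pattern described above can be sketched as follows (again a conceptual sketch with invented names, not an OPC API):

```python
class Publisher:
    """Owns the data and pushes it to all subscribers when it changes."""
    def __init__(self):
        self.subscribers = []
        self.value = 0.0

    def subscribe(self, callback):
        # The client registers once; no polling is needed afterwards.
        self.subscribers.append(callback)

    def update(self, new_value):
        # One internal event fans out to every subscriber.
        self.value = new_value
        for callback in self.subscribers:
            callback(new_value)

received_a, received_b = [], []
pub = Publisher()
pub.subscribe(received_a.append)   # client A subscribes
pub.subscribe(received_b.append)   # client B subscribes
pub.update(21.7)                   # one update reaches both clients
```

One update() call produces one message per subscriber, and only when the value actually changes, which is the traffic difference visible in Figure 3.14.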
The client/server model can be either synchronous or asynchronous. These types of read and write
operations are shown in more detail in Figure 3.15, where the different read and write operations will
depend on the implementation of the clients.
Figure 3.13: A publisher/subscriber model with 1 publisher and 2 subscribers.
Figure 3.14: The client/server model shown on the top and the publisher/subscriber model shown at the
bottom, both in the time domain. Notice the difference in traffic load between these models.
Figure 3.15: The synchronous, asynchronous, and subscription based read and write operations between
an OPC server and an OPC client. Client/server types on the top and publisher/subscriber type at the
bottom (Kirrmann 2007).
Chapter 4
OPC specification
4.1 Introduction
The OPC protocol consists of a set of specifications:
1. Common,
2. Data access (DA),
3. Alarm and events (AE),
4. Data exchange (DX),
5. Historical Data Access (HDA),
6. Batch,
7. Complex data,
8. XML,
9. Security,
10. Commands,
11. Unified Architecture (UA).
The specifications are developed in Working Groups in the OPC Foundation, and only members
have access to drafts, pre-releases and information from the working groups. The released specifications are, however,
available for everybody at http://www.opcfoundation.org (select Downloads, then Specifications). The specifications often deal with the data on the server side, giving a
set of different servers for different specifications.
The interconnection between the different specifications is shown in Figure 4.1.
Each specification is a description of a server, a software module that can be running on a node in
the system. Note that several servers can be running on the same node in the system. In small systems,
most probably all the servers will be running on the same node. Figure 4.2 shows the usage of some of
the OPC servers, the connection to the OPC protocol and the plant. Note that the OPC protocol is the
connection between the systems and also the connection point for the OPC clients.
4.2 Communication
COM and DCOM are the basis for the OPC communication, and the COM/DCOM connection consists of:
1. Objects,
2. Interfaces.
Figure 4.1: The interconnection of the OPC specifications.
Figure 4.2: The connections between some of the OPC servers in a process system.
Figure 4.3: A DCOM client and server with an interface connection.
Figure 4.4: Local communication in the same process (application).
Figure 4.5: The interprocess communication between the OPC client and the OPC server using COM.
The interface is the connection point between the client and the server object; the client is only able to see
the contents of the DCOM server through the interface connection, see Figure 4.3. The DCOM server will be
like a black box for the client, as the client does not know anything about the functionality of the server.
A set of functions is available at the main interface, like:
1. AddRef(); increment the reference count of the server object,
2. Release(); decrement the reference count of the server object,
3. QueryInterface(); request information about the functionality of the server.
The COM/DCOM solution gives the same functionality on both local and remote systems.
It is the responsibility of the client to find
the interface of the server component and connect to it. If the client and the server are in the same software
application, which is not very likely but possible, the client will connect directly to the server, as shown
in Figure 4.4. All the code for the application will be in the same process, giving a fast and direct
connection. This solution is only possible for very special systems, and will not utilize OPC. A more
usual way of doing the communication is between two different processes on the same computer. This
communication is often called InterProcess Communication (IPC), and a lot of different IPC mechanisms, also
called middleware, exist. OPC uses COM and DCOM as the IPC, as shown in Figure 4.5. The COM
module is used as the IPC channel for OPC on a local machine.
IPC can also be used between processes on different computers, but the IPC must then support
network connections as well. COM is only for IPC on the same computer, while DCOM can be used for
communication between two processes on two different computers. The usage of DCOM is shown in
Figure 4.6.
The network support can use different protocols as well, and a set of protocols that DCOM can use is
shown in Figure 4.7.
The most used OPC standards in the process industry are:
1. OPC DA (Data Access),
2. OPC AE (Alarm & Events),
3. OPC HDA (Historical Data Access).
Figure 4.6: Usage of DCOM for COM communication between two computers.
Figure 4.7: Network connection in DCOM.
4.3 OPC Common
Usage:
1. common definitions for several of the OPC specifications,
2. instructions for registration of OPC software modules.
OPC Common interfaces:
1. IOPCServerList; find the OPC servers on a computer,
2. IOPCCommon; let the client define the language,
3. IOPCShutdown; callback to the client.
Registration of OPC software modules: using the Windows registry (or ini files).
4.4 OPC Data Access (OPC DA)
The current Data Access specification 3.0 has 19 interfaces and 69 methods (functions). Specification
1.0A is from 1997, and specification 2.0 is from late 1998. The functions differ between specifications,
so it is important to know the specification version of the DA server. The usage of the OPC DA is:
1. reading of measurement values,
2. calculation and estimation of values,
3. writing of values.
The OPC DA server implements a set of services, and the clients use these services. Figure
4.8 shows an example of a system using an OPC DA server.
Tags are used a lot in the process industry and are normally assigned to a piece of information. A
tag consists of a name describing a single point of information, meaning that a process system (plant)
consists of hundreds or even thousands of tags. The figure shows that the DA server contains one tag for
each measurement point and controller point in the plant, and it is the responsibility of the DA server
to get (or set) the information from the controllers. This is one of the reasons for the complexity of the
servers; they need to have drivers for a lot of controllers and/or measurement systems.
The OPC servers provide groups for organizing items and adding access rights and names, and the
client can use a group index instead of an item index. The group concept is important, and one or more
items can be added to the same group. The group information is stored on the server, but the server
lets the client maintain the group information, and the client can also browse the name space of the
server. See Figure 4.9, where the name space contains information about the items on the server. The
name space is the area in the server where all the group and tag information is stored.
Figure 4.8: An example of a system with an OPC DA server and OPC clients using tags for the I/O
values. Note that the OPC DA server will not use OPC for communication with the I/O devices
(Kirrmann 2007).
A more detailed view is shown in Figure 4.10, showing the tree structure of the group and tag
information in an OPC DA server. The information is stored in a root, several levels of branches containing
the groups, and a leaf level containing the tags. Each tag indicates a specific measurement point or
controller point in a process.
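The root/branch/leaf structure of the name space can be modelled as a nested dictionary, and browsing then becomes a walk down the tree. This is a sketch only; the branch and tag names are invented, and a real DA server exposes browsing through its own interfaces:

```python
# Root -> branches (groups) -> leaves (tags with values).
name_space = {
    "Plant": {
        "Reactor1": {"TT-101": 21.5, "PT-102": 1.3},
        "Reactor2": {"TT-201": 35.0},
    }
}

def browse(space, path):
    """Follow a path of branch names down to a branch or a tag (leaf)."""
    node = space
    for part in path:
        node = node[part]
    return node

print(browse(name_space, ["Plant", "Reactor1", "TT-101"]))  # a single tag value
print(sorted(browse(name_space, ["Plant", "Reactor1"])))    # tags in one group
```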
Exchange of measurement values can be done by groups, or by items belonging to a group. The
read/write operations between the server and the client to access the values can be:
1. reading or writing synchronously,
2. reading or writing asynchronously,
3. reading as a subscription.
Figure 4.9: A client connection with a DA server group with a set of I/O items.
Figure 4.10: The name space information of a DA server (Kirrmann 2007).
Figure 4.11: A deadband meaning that the output will not change even if the input is changing (system
is dead).
The subscription based operation can depend on the following settings:
1. deadband variation (in %); the deadband is an area of a signal range (or band) where no action
occurs (the system is dead), see Figure 4.11. Deadband is a form of data compression: remove some
of the data but keep the information,
2. minimum time interval (in seconds),
3. for each item in the group, or for the whole group.
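A subscription filter combining a percentage deadband with a minimum time interval could look like the sketch below. The class and parameter names are invented for illustration; real DA servers compute the deadband over the configured engineering-unit span of the item:

```python
class SubscriptionFilter:
    def __init__(self, deadband_pct, min_interval, span):
        self.deadband = deadband_pct / 100.0 * span  # absolute band width
        self.min_interval = min_interval             # seconds
        self.last_value = None
        self.last_time = None

    def should_publish(self, value, now):
        # The first sample is always published.
        if self.last_value is None:
            self.last_value, self.last_time = value, now
            return True
        # Suppress updates inside the deadband: data is removed,
        # but the information (a significant change) is kept.
        if abs(value - self.last_value) < self.deadband:
            return False
        # Also suppress updates arriving faster than the minimum interval.
        if now - self.last_time < self.min_interval:
            return False
        self.last_value, self.last_time = value, now
        return True

f = SubscriptionFilter(deadband_pct=2.0, min_interval=1.0, span=100.0)
print(f.should_publish(50.0, now=0.0))   # first sample
print(f.should_publish(51.0, now=5.0))   # inside the 2 % deadband
print(f.should_publish(55.0, now=6.0))   # outside deadband and interval
```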
A sample of a value saved in the OPC DA server has the following descriptions:
1. value (only the current value, no history),
2. quality; like GOOD, BAD (unknown error), CONFIG_ERROR, DEVICE_ERROR, SENSOR_ERROR,
COMM_FAILURE, ...,
3. timestamp (Coordinated Universal Time (UTC) / Greenwich Mean Time (GMT)),
4. access rights,
5. properties; SI unit, scaling, description, ...
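The value/quality/timestamp record that a DA server keeps for each item can be modelled with a small record type. The field and quality names follow the list above, but the class itself is an invented sketch, not an OPC data type:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ItemSample:
    value: float                      # only the current value, no history
    quality: str = "GOOD"             # e.g. GOOD, BAD, SENSOR_ERROR, ...
    timestamp: datetime = field(      # UTC, not local time
        default_factory=lambda: datetime.now(timezone.utc))
    access_rights: str = "read"
    unit: str = ""                    # property: SI unit

sample = ItemSample(value=21.5, quality="GOOD", unit="degC")
print(sample.quality, sample.timestamp.tzinfo)  # quality plus a UTC timestamp
```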
The server will have a lot of interfaces and methods that can be used by the clients.
Some of the server interfaces are: IOPCCommon, IOPCServer, IOPCServerPublicGroups, IOPCBrowseServerAddressSpace.
Some of the IOPCServer methods are: AddGroup, GetErrorString, GetGroupByName, GetStatus,
RemoveGroup, CreateGroupEnumerator.
Some of the interfaces for the group object are : IOPCItemMgt, IOPCGroupStateMgt.
Some of the IOPCItemMgt methods are: AddItems, ValidateItems, RemoveItems, SetActiveState,
SetClientHandles, SetDataTypes.
4.5 OPC Alarms & Events (OPC AE)
Usage:
1. monitoring of events,
2. reports of events.
Meaning:
1. discrete alarms,
2. level alarms; changes in the process value,
3. warnings,
4. information.
The tasks of the OPC AE server will then be:
1. alarms on sensor devices,
2. alarms on sensor values/data,
3. alarms on control parameters,
4. status on hardware connections,
5. status on systems and subsystems.
Usage of the OPC AE server will be:
1. detection of alarms and/or events from one or more sources,
2. publishing to one or more clients using subscription (including a filter),
3. type of clients can be GUI systems and separate alarm systems.
Three different types of events:
1. simple; a simple event in the system,
2. condition; a condition in the system, can be several events,
3. tracking; an external event, often by an operator or an external system.
An example of an OPC client having an event subscription on an OPC AE server is shown in Figure
4.12.
The structure of the connection between a client and an OPC AE server is shown in Figure 4.13,
showing the connection sequence. The sequence is:
1. the client connects to the AE server,
2. the client sets up a subscription request to the AE server, getting a connection point (CP),
3. the client configures the connection point (CPC),
4. the connection point (CP) sends an event when the configured condition of the connection
point is met.
Figure 4.12: An OPC client and an OPC AE server with a group for condition events.
Figure 4.13: The connection sequence between a client and an AE server in an OPC system.
4.6 OPC Data eXchange (OPC DX)
Usage:
1. configuration and data exchange between different systems,
2. use existing specifications if possible,
3. define a standard for system configuration.
OPC Data Access is often used for vertical information exchange, while OPC Data eXchange is
often used for horizontal information exchange. This means that OPC DA is used between servers and
clients and OPC DX is used between servers, as shown in Figure 4.14.
An OPC DX server is an OPC DA server with DX extensions. The extension consists of two different item types:
1. readable; a data source,
2. connectable; a data destination.
Figure 4.14: The normal differences between OPC Data Access and OPC Data eXchange.
Figure 4.15: The readable / connectable connection between OPC DA and OPC DX servers.
A connection is made between the readable and the connectable; the connection is saved on the
destination side, and it is the responsibility of the OPC DX server to read the value of the readable on
the OPC DA server and update the connectable value. See Figure 4.15.
4.7 OPC Historical Data Access (OPC HDA)
The reasons/usages for an OPC HDA server:
1. Reading of historical values,
2. Tools for historical values,
3. Tools for database clients.
Server functions:
1. reading and writing data for process and time series database,
2. access of the name space of the DA server,
3. historical values with attributes, timestamps and quality,
4. support for annotation and aggregation,
5. support for replay (playback) of historical values.
The HDA specification can give a range of extra server functionality, from a simple server for reading
of trend data only, to a complex server with a lot of extra functions.
Value attributes for saving a new sample:
1. maximum time interval; a new value has to be saved after this time interval,
2. minimum time interval; a new value shall NOT be saved during this time interval,
3. exception deviation; minimum change for saving a new value,
4. exception deviation type; absolute value, percent of new value, or percent value of value span
(HighEntryLimit - LowEntryLimit),
5. High Entry Limit; the upper limit for a valid value,
6. Low Entry Limit; the lower limit for a valid value.
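The attribute rules above can be combined into a single decision: force a save after the maximum interval, never save inside the minimum interval, and otherwise require the exception deviation to be exceeded. This is a sketch with invented names, using the absolute deviation type only:

```python
def should_archive(last_value, last_time, value, now,
                   max_interval, min_interval, exception_deviation,
                   low_limit, high_limit):
    """Decide whether a new historical sample should be stored."""
    if not (low_limit <= value <= high_limit):
        return False                  # outside the valid entry limits
    if now - last_time >= max_interval:
        return True                   # forced save after the maximum interval
    if now - last_time < min_interval:
        return False                  # never save inside the minimum interval
    # Otherwise the change itself must be significant enough.
    return abs(value - last_value) >= exception_deviation

# 0.5 units of change is not enough; 2.0 units is.
print(should_archive(20.0, 0.0, 20.5, 10.0, 60.0, 5.0, 1.0, 0.0, 100.0))
print(should_archive(20.0, 0.0, 22.0, 10.0, 60.0, 5.0, 1.0, 0.0, 100.0))
```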
Timestamps:
1. absolute time; the reference is UTC (Coordinated Universal Time, the same time as the old GMT),
2. relative time;
(a) Keywords: NOW, YEAR, MONTH, WEEK, DAY, HOUR, MINUTE, SECOND
(b) Syntax: Keyword Offset
(c) Offset: Y, MO, W, D, H, M, S
Figure 4.16: The contents of a batch procedure.
3. Example: daily report:
(a) Start=DAY-1D
(b) Stop=DAY
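The daily-report example can be resolved against a reference clock. The sketch below interprets only the NOW and DAY keywords with a day offset; the function name is invented, and a real HDA server supports all the keywords and offset units listed above:

```python
from datetime import datetime, timedelta, timezone

def resolve(expr, now):
    """Resolve expressions like 'NOW', 'DAY' or 'DAY-1D' against `now`."""
    keyword, offset = expr, timedelta(0)
    if "-" in expr:
        keyword, tail = expr.split("-", 1)       # e.g. 'DAY' and '1D'
        offset = -timedelta(days=int(tail[:-1])) # only the D unit handled here
    if keyword == "NOW":
        base = now
    elif keyword == "DAY":                       # start of the current day
        base = now.replace(hour=0, minute=0, second=0, microsecond=0)
    else:
        raise ValueError("keyword not handled in this sketch: " + keyword)
    return base + offset

now = datetime(2011, 1, 5, 14, 30, tzinfo=timezone.utc)
start, stop = resolve("DAY-1D", now), resolve("DAY", now)  # yesterday's report
print(start.isoformat(), stop.isoformat())
```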
Annotation (Comments):
1. text,
2. username,
3. timestamp.
Aggregation:
1. Calculation only on request,
2. no saving of the calculated values,
3. Optional extension,
4. Types: Average, Minimum, Maximum, Start, Stop, Count, etc.
Playback:
1. Playback of the historical data from the OPC HDA server to a client,
(a) define the speed and duration,
(b) define values and aggregation.
2. Useful for testing, simulation, and teaching,
3. Optional extension.
4.8 OPC Batch
Batch is the execution of a series of programs (jobs) on a computer without human interaction. A batch
consists of a set of operations, as shown in Figure 4.16.
The definition of a batch process (Furenes 2009): Processes that lead to the production of finite
quantities of material by subjecting quantities of input materials to a defined order of processing actions
using one or more pieces of equipment.
Some characteristics of a batch process (Furenes 2009):
Figure 4.17: A batch process with input variables, output variables, measured variables, and manipulated
variables (Furenes 2009).
1. Run intermittently to produce low-volume and high-value products,
2. Have dynamic nature of operation, no steady state,
3. Have nite time of operation,
4. Frequent repetition of the same process.
The reasons for the OPC Batch server:
1. S88 standard for batch control (IEC 61512-1),
2. Easy to configure batch processes,
3. Easy to operate batch processes.
An overview of a batch process is shown in Figure 4.17. The process will have some input variables,
output variables, measured variables, and manipulated variables. The run time of one batch and the
batch run index will be two time variables.
The S88 standard contains:
1. Recipe handling,
2. Production planning,
3. Process control,
4. Monitoring.
Examples of batch processes:
1. making a report; the process must collect a set of data, organize the data, make the report pages,
and send the pages to the printing system,
2. baking a cake; the process is shown in Figure 4.18.
Figure 4.18: Baking a cake; an example of a batch process (Furenes 2009).
4.9 OPC Complex Data (OPC CD)
The reasons/usages for an OPC Complex Data server:
1. Able to use more complex data types than OPC DA,
2. Structures of simple items or complex items,
3. The client should read both the structure and the values,
4. Only extensions to OPC Data Access.
Complex data:
1. Consists of simple or complex items,
2. Unlimited number of nested levels,
3. Structures can be:
(a) arrays,
(b) structures (database records),
(c) arrays of structures,
(d) arrays and structures.
Figure 4.19 shows the usage of an OPC CD server to extend the OPC DA server with complex data
structures for the OPC clients.
4.10 OPC Security
The reasons for security control:
1. Control of access to data in the system,
(a) Requirement for physical security of data,
(b) Requirement for confidentiality of data.
2. OPC is an open standard,
(a) Anybody can make an OPC client and access data,
(b) Using a wireless network, no physical connection is necessary.
Figure 4.19: The OPC clients are using complex data structures when communicating with the DA
server. The DA server is extended with a CD server to support these complex data structures.
OPC is based on the security in Windows:
1. User access in the system (principals),
2. Users have to be members of a group (principals),
3. Access certificates,
4. Security objects,
5. Access control lists,
6. Reference monitor,
7. Communication channels,
8. Authorization,
9. Impersonation (acting as another principal).
COM/DCOM objects:
1. Security objects,
(a) the principal of the client must have access to the COM/DCOM objects,
(b) using subscriptions, the principal of the server must have access to the client,
(c) using the application DCOMcnfg.exe for access configuration.
The DCOMcnfg.exe application is shown in Figure 4.20.
Recommended security settings for OPC servers (Krogh 2005):
1. Authentication Level: Connect,
2. Impersonation Level: Identify,
3. OPC servers should be running on one specified account,
Figure 4.20: The DCOMcnfg.exe application.
4. Use DCOMcnfg to allow users to access the servers,
5. Use DCOMcnfg to allow users to start the servers.
An OPC server can use three different levels of security:
1. No security,
2. DCOM security: security on users, no security access on objects,
3. OPC security: no security or DCOM security, plus security access on objects.
4.11 OPC XML-DA
The reasons/usages for an OPC XML Data Access server:
1. better integration of systems that are not tightly connected,
(a) systems on different operating systems,
(b) systems in different application domains,
2. better usage of OPC on the internet.
Extensible Markup Language (XML):
1. text based,
2. rules for structured information,
3. focus on information, not presentation.
XML example:
<STUDENT>
<NAME>
<FIRST>ABC</FIRST>
<MIDDLE>DEF</MIDDLE>
<LAST>GHI</LAST>
</NAME>
<UNIVERSITY>
<CODE>TUC</CODE>
<WEB>www.hit.no</WEB>
</UNIVERSITY>
</STUDENT>
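The student document above can be built and read back with Python's standard xml.etree module, illustrating the focus-on-information point: the same structure can be produced, serialized to text, and parsed again. The element names follow the example; the rest is a sketch:

```python
import xml.etree.ElementTree as ET

# Build the STUDENT document from the example.
student = ET.Element("STUDENT")
name = ET.SubElement(student, "NAME")
for tag, text in [("FIRST", "ABC"), ("MIDDLE", "DEF"), ("LAST", "GHI")]:
    ET.SubElement(name, tag).text = text
university = ET.SubElement(student, "UNIVERSITY")
ET.SubElement(university, "CODE").text = "TUC"
ET.SubElement(university, "WEB").text = "www.hit.no"

document = ET.tostring(student, encoding="unicode")  # serialize to text
parsed = ET.fromstring(document)                     # and parse it back
print(parsed.findtext("NAME/FIRST"), parsed.findtext("UNIVERSITY/WEB"))
```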
Figure 4.21: The message sequence in an XML-DA server.
SOAP:
1. Simple Object Access Protocol,
2. Communication protocol,
3. XML based,
4. OPC XML is using SOAP.
The operation mode using XML will be:
1. The OPC client makes an XML document (from input from the user),
2. The client sends the XML document to the server,
3. The server extracts the information from the document,
4. The server makes new information based on the client information,
5. The server sends a new XML document back to the client,
6. The client presents the information to the user or extracts the necessary information.
The message sequence is shown in Figure 4.21.
OPC XML can be used instead of DCOM based OPC Data Access, as DCOM protocols may have
problems with firewalls.
Will OPC XML replace DCOM based OPC?
1. SOAP/XML is basis for the new .NET communication technology,
2. SOAP/XML will be available on all new Microsoft operating systems,
3. Poor real-time support,
4. Overhead in the communication protocol.
4.12 OPC Command
OPC Commands can be used for configuration of servers and control of state based operations, but they are often
specific solutions for a specific server. OPC commands are often XML based.
A command:
1. Takes a long time,
2. changes the state of the server.
Figure 4.22: The contents of the OPC-UA specification. The specification contains a new communication
standard and has a better integration of the OPC-DA, the OPC-HDA, the OPC-AE, the OPC-CD, and
the OPC-DX specifications (www.opcfoundation.com: jan-09).
4.13 OPC UA
OPC Unified Architecture is a new architecture:
1. using protocols based on XML and .NET technology from Microsoft, which will influence all the OPC
specifications. DCOM is often based on DLLs (Dynamic Link Libraries), giving a lot of problems with
different versions (DLL hell), and DCOM communication has problems through firewalls. Very
often most of the security has to be switched off to let the OPC system work.
2. with better integration between the different OPC specifications, such as the OPC-DA, the OPC-DX,
the OPC-CD, the OPC-AE, and the OPC-HDA specifications. Today a server is normally
installed for each of the specifications; why not combine the specifications into fewer servers?
3. focusing more on services; the OPC-UA will be based more on the Service Oriented Architecture
(SOA), with a focus on services, not functions.
4. for better integration on non-Microsoft systems, allowing an easier integration of systems not having
a tight coupling. This includes systems like embedded systems and systems communicating over
the internet, to mention some.
Figure 4.22 shows an overview of the OPC-UA specification.
There are three primary factors that influenced the decision to move forward with the OPC UA
architecture (www.matrikon.com 2007):
1. Despite the major OPC installation base, Microsoft is focusing its future efforts on Web Services and
SOA applications. In addition, there is increasing pressure from end users looking for OPC support
on Linux and other non-Windows platforms.
2. OPC is no longer a simple point-to-point solution, and is becoming the backbone of increasingly
complex OPC architectures that involve multiple specifications. Vendors and users require a
single interface that exposes the key functional areas of OPC.
3. The OPC Foundation gets more and more requests from clients and other institutions to
leverage its standards to aid those who are defining other industry standards at a granularity
below the interface level.
Chapter 5
OPC system
5.1 Introduction
OPC today consists of a lot of standards, and each of the standards is implemented as a server. An
OPC system will therefore consist of a set of servers having the data and a set of clients using the data.
These servers can be installed on one or several computers depending on the structure of the SCADA
system. The important aspect here is that every software module that should be integrated into this
SCADA system must support the OPC protocol.
The implementation of the OPC protocol will depend on the functionality of the client or the server.
Normally the implementation of OPC in a client is less time consuming than the implementation
in a server.
5.2 Why use OPC
1. Using standard network technology,
(a) proven,
(b) good performance,
(c) using existing networks,
(d) security and availability,
(e) cost,
(f) knowledge,
(g) wired and wireless.
2. open communication standard,
(a) independent,
(b) many suppliers,
(c) good performance.
5.3 OPC development
Two main reasons for developing your own client or server:
1. Control,
2. Performance.
A client is easy to develop, taking from a couple of days and up depending on the functionality of the client.
A server is much more work, up to almost a year (from 3 to 12 months) depending on the knowledge of the
specifications. The servers should also be approved by the OPC Foundation.
The types of knowledge needed (Krogh 2005):
1. OPC specifications,
2. Windows security,
3. OOP (Object Oriented Programming),
4. Developing tools,
5. COM/DCOM,
6. OPC toolkits.
5.4 OPC test
The OPC system should be tested before being installed in a real plant. The best way of testing such
a system is to use the simulation mode of the servers if the hardware is not available. Most OPC
servers have a simulation mode, and the next chapter will show the usage of a freeware OPC server from
Matrikon (see www.matrikon.com) in simulation mode. This server has a limited number of variables but
can be used for smaller tests.
Chapter 6
OPC exercise
Install MatrikonOPCSimulation.EXE on the computer (Windows version only) and start the OPC
server. Note that this installation will only install the Simulation server, not the DDE server. The DDE
server must also be installed if using data from the Microsoft Excel application. Figure 6.1 shows the
startup window of the OPC simulation server.
Click on the Alias Configuration line, and a new window is shown on the right side of the main
window. Right click on the line and select insert new alias item as shown in Figure 6.2.
Insert a new value, call the value Temp, and select the value as a Triangle Waves of type Int4, as shown
in Figure 6.3. An item called Temp is now defined in the OPC server.
Let us use the MatrikonOPCExplorer as an OPC client to display the item Temp from the OPC server.
The startup window of the OPC client, and the Temp item defined in the OPC server, are shown in Figure
6.4.
First right click on the server name to connect to the server. Select connection to the OPC simulation
server on localhost. The client should now connect to the server, and the icon in front of the server name
should change to show that the client is connected. See Figure 6.5 for the connect menu.
When the client is connected, a group must be added to be able to connect to the item(s) on the
server. Right click on the server name again, and select the Add Group option as shown in Figure 6.6.
Select Add Group and call the new group TempGroup, as shown in Figure 6.7. Other parameters
such as Update Rate and % Deadband can be set on the group as well.
Select OK to save the group, and right click on the group name to add items to the group. Select
Add Items in the menu, shown in Figure 6.8.
Select New Items from the menu, and a new window showing the items for this group will be presented.
The window is shown in Figure 6.9.
Click on the Configured Alias folder name and the available tags will be shown in the lower window. Use
double click on the Temp tag, and the tag name will show in the Tag ID line on top of the window. Use
the button with the arrow to transfer the tag name to the added tag window shown in Figure 6.10.
Then select File → Validate Tags to validate the tag. Use File → Update and Return to save the
tag in the group, and the application will return to the main window of the OPC client. The OPC client
will now display the value of the item as shown in Figure 6.11.
More practice:
1. Try to add more items to the group, with different types of values and so on.
2. Try to use another client, for example MATLAB® if you have the OPC Toolbox.
Figure 6.1: The startup window of the OPC simulation server.
Figure 6.2: Insert of new alias items in the OPC server.
Figure 6.3: Insert an item called Temp being a Triangle Waves type as Int4.
Figure 6.4: Start of the OPC client on top of the OPC server.
Figure 6.5: Right click on the server name to get the connect menu option.
Figure 6.6: Right click on a connected OPC server to get the Add Group menu option.
Figure 6.7: Add a new group to the OPC server from the OPC client.
Figure 6.8: Right click on the group name to get a menu for adding item(s) to the group.
Figure 6.9: The window for adding new items to a specific group on the OPC server.
Figure 6.10: The Temp item from the OPC server is added to our group.
Figure 6.11: The OPC client is displaying the value of the Temp item from the OPC server.
Part III
Real-Time System
Chapter 7
Introduction
A real-time system means a computer based system where one or more of the applications must be able
to synchronise with a physical process. Real-time means that the computer system is monitoring
the states of the physical process and must respond to changes in one or more of these states within
a maximum time. A real-time system can then be used for monitoring different parameters in the
physical process for presentation, warnings and alarm situations, and for control. Control is possible by
regulating the input variables to the physical process. A typical system is shown in Figure 7.1, where
a real-time system is influenced by sensors in the physical process, and the real-time system is using the
information from the sensors to control the input variables to the physical process.
NOTE: Real-time does not mean as fast as possible; a good real-time design just
means as fast as necessary to satisfy the requirements of the system.
7.1 Synchronization
The applications of the real-time system must run together with the physical process, so the real-time
system must be able to manage simultaneity. The solution is often to run several applications on the
computer system, or on different computers in a distributed system. These solutions require some sort
of synchronization between the applications, and between the applications and the physical process.
The most common way of synchronizing the applications with the physical process is the use of sensors,
while synchronization between the applications uses global variables or messages. This is shown in Figure 7.2.
When several applications are running simultaneously on a computer system, there must also be
some control of the usage of the resources in the computer system. Resources can be both hardware and
software, like I/O (input/output) units, global variables in the software, the CPU, memory, disk etc. One
example is the printer device: only one application can use the printer at a time. If several users print at
the same time, the output will be mixed. This is also shown in Figure 7.2.
Figure 7.1: A real-time system for monitoring and control of a physical process.
Figure 7.2: Synchronization in a real-time system.
Figure 7.3: The sequential dataflow of an office software application.
7.2 Programming
Office software follows a sequential dataflow: the application is started and executes a set of operations
in a fixed sequence. The system will use the data already available in the system or ask the user, and
will end when the operations are finished. This is shown in Figure 7.3.
A real-time system runs in parallel with a physical process and must run as long as
the physical process is running. The process may be running all the time, and the real-time system must
then be a 24/7 system (24 hours, 7 days: the system must run all the time and cannot be stopped or
restarted). A real-time system executes its operations depending on events from the physical
process and must react within a specific time to these events. Data will not already be available in the
system; the real-time system must read data from the physical process when needed. This is shown
in Figure 7.4.
7.3 Embedded system
An embedded system is a special-purpose system in which the computer is completely encapsulated by
the device it controls; the system is dedicated to a specific purpose. An embedded system does not need
to have any real-time requirements, but the concept embedded system is often used for a real-time
system and vice versa. The differences are:
Figure 7.4: The event-based operations of a real-time software application.
Figure 7.5: The main parts of a Central Processing Unit (CPU); the important parts for a real-time
system are the registers and the PC.
1. An embedded system is often a system that works without interaction with a user; it will often
be a black box.
2. A real-time system runs in parallel with a physical process, with requirements for simultaneousness,
and the reaction to external events has to be within a specific time.
Remark 1 A real-time system will very often be an embedded system, while an embedded system does not
need to be a real-time system.
7.4 CPU and microcontrollers
The CPU, Central Processing Unit, is the main unit in a computer system. The CPU consists of a set
of registers, a program counter (PC) holding the address of the next CPU instruction, an Arithmetic
Logical Unit (ALU) for the operations, and logic for the control of the memory address. Real-time
systems often need a lot of I/O, so a microcontroller can be used instead. The main contents of
a CPU are shown in Figure 7.5. A microcontroller is a CPU with different types of I/O units integrated.
The CPU or the microcontroller will normally be the most important resource in a real-time system.
7.5 Example
In Figure 7.6 a real-time system is used for monitoring and controlling the liquid level in a buffer tank.
The buffer tank should never be empty nor full, and the liquid level is monitored using a high level switch
at 19.5 liters and a low level switch at 0.5 liters. The output from a switch is low when not covered by
liquid and high when covered by liquid, with a delay of 0.08 to 0.1 seconds. The pump is controlled by a
simple ON/OFF signal, and the pumping rate is 1 liter per second. The delay for the ON/OFF control
of the pump is a maximum of 0.2 seconds.

Figure 7.6: Buffer tank with low and high level control, a pump, and a real-time system controlling
the pump based on the low and high level sensors.

What will the real-time requirements for this system be? Will it be a real-time system?
The real-time requirement is to start or stop the pump based on the sensor signals without the
buffer tank being overfilled or emptied. The pump capacity is 1 liter per second, giving 0.5 seconds to start
or stop the pump from the low or high level switch. Let us use the high level sensor as this has the
largest delay time. The real-time requirement for the system will be:
t_RT,req = t_available - t_pump - t_sensor = 0.5 - 0.2 - 0.1 = 0.2 s
meaning that the system will work if the RT system can deliver the pump signal within 0.2 seconds.
If the system cannot react within 0.2 seconds, it will not work as a real-time system.
Chapter 8
Specifications
8.1 Descriptions
Definition 2 A real-time system is a system that reacts at the right time, in a predictable way, to an
unexpected external event (Martin Timmerman, Belgium).

Definition 3 A real-time system is a system where the calculations not only depend on a logically correct
execution, but also on the time when the result is available (Hans Christian Lønstad, Data Respons ASA,
Norway).

The requirements for a system to be a real-time system:
1. deadline; the real-time system must detect any changes of the states of the physical process within
a specific time. The system is defined as failing if these deadlines are not kept,
2. simultaneousness; the system must be able to handle several changes in the physical process at
the same time, and all of these changes must be detected within their deadlines. This requirement
demands parallelism in the system; the solution may be multitasking and/or a distributed
system,
3. resources; the real-time system will have a limited set of resources like the CPU, memory, disk, I/O
devices, synchronization variables in the software and so on, so these resources must be shared.
It is important that the requirement for simultaneousness is kept even with a limited set of resources.
A real-time system can be defined as a hard or a soft system, with the following characteristics:
1. a hard real-time system;
(a) delay will NEVER be accepted,
(b) information is useless if given at the wrong time,
(c) the system will fail if the deadline is not kept,
(d) shall not miss a deadline,
(e) examples:
i. the Anti-lock Braking System (ABS) in a car,
ii. control systems for fighter aircraft.
2. a soft real-time system;
(a) delay can be accepted, but the cost may be higher,
(b) lower performance can be accepted if delayed,
(c) should not miss a deadline,
(d) examples:
i. IP telephony,
ii. film/video streaming from the internet.
8.2 Properties
1. Event: a change in the physical process causing an event in the real-time system. An event is
something that comes from outside the system or a part of the system. An event can be a message
sent to one or more of the tasks in the real-time system,
2. Task / Process / Thread: to achieve simultaneousness in the real-time system, the problem
should be divided into smaller parts depending on the deadlines for these parts. Each of these
parts can then be designed as a software module like a task, process or thread,
3. Multitasking: needed to achieve simultaneousness in the real-time system, as all the tasks, processes,
or threads must run in parallel,
4. Scheduler: the service in the real-time system responsible for the multitasking, using a set of rules
to select the next task to run (both Windows and Linux are multitasking operating systems
containing a scheduler),
5. Preemption: important events in the physical process may require a faster decision than
the scheduler service can give. A real-time system must therefore be able to be preemptive; to
interrupt the running task and start a new task. Note that the deadline must still be kept for
the interrupted task,
6. Interrupt: an event in the real-time system that is used for temporarily stopping the running task
and starting a new task,
7. Interrupt latency: the time from when an interrupt occurs until the new task is running,
8. Priority: the tasks will be assigned different priority levels when analysing and designing the
system, depending on the importance of each task. The scheduler will always try to run the tasks
with the highest priorities first,
9. Watchdog: a hardware device monitoring the whole system, restarting the system if the software
fails.
Chapter 9
System architecture
The architecture of a real-time system depends on the size of the physical process that shall be monitored
and/or controlled, the real-time requirements of the physical process, and security. The size of the physical
process will define the number of input and output signals (sensors and actuators), and the real-time
requirements on these signals. Security means, among other things, the requirements for Mean Time To
Failure (MTTF) and Mean Time To Repair (MTTR), and what happens if a deadline cannot be kept.
The architecture can be a single system, a duplicated single system for redundancy, or a distributed
system. Distributed systems can distribute the microcontrollers (CPU cards), the I/O units, and other
resources. Networks and data buses are important elements in a distributed system, as these are the
communication links between the distributed units. A real-time system is often a distributed system in
one way or another, especially with distributed I/O. A single system and a distributed system are shown
in Figure 9.1.
In a distributed system, the real-time functions/services in the operating system are important, since
tasks and processes cannot use common memory areas for communication or synchronization.
9.1 Scheduling
The job of the real-time system is the total mission for the device and its associated hardware, consisting
of multiple tasks (multitasking). A job is divided into a set of tasks: in a real-time system an application is
divided into a set of independent modules, the tasks. The modules are divided in such a way that
each module should fulfil its own time-critical events.
There are two types of schedulers:
1. the long term scheduler (batch job scheduler, not useful for real-time systems),
2. the short term scheduler (CPU scheduler, useful for real-time systems).
The CPU is an important resource in a real-time system due to the requirements of deadlines and
simultaneousness. The simultaneousness is solved by letting the different software contexts be active in
short time slices.

Figure 9.1: A single system and a distributed system.

Figure 9.2: The principles of multitasking, macro view and CPU view.

Figure 9.2 shows the principle of multitasking, the macro view to the left and the CPU view to the
right. The macro view shows that the contexts are running simultaneously over, for instance, the last 10
minutes, while the CPU view shows the details of, for instance, the last second.
The task, a software module, will be controlled by the scheduler. Without a scheduler only one task
can run, called a single task system. A single task system is an application that runs in an endless loop.
Any real-time part of the application must then be solved by interrupt service routines (ISRs). This solution
is used in small systems, simple systems, or if the real-time behavior is not critical.
The CPU view in Figure 9.2 shows how the scheduler is working. Each real-time task runs for a
short period of time before the next real-time task is started. The scheduler can be either non-preemptive
or preemptive:
1. non-preemptive scheduler: a task will use the CPU until it releases the CPU to the next task.
(a) Every context performs a number of operations before it calls a function in the
scheduler. Every context must then consist of a main function and a state machine, and this
requires good knowledge when developing the software. The reason is that every context has
the responsibility to run for only a short time and then call the scheduler function. See Figure
9.3 for this type of scheduler.
2. preemptive scheduler: the CPU can be taken away from the task during execution.
(a) The normal way of making the scheduler is to use a scheduler task controlled by an interrupt.
This interrupt is controlled by a hardware signal from a timer, and the timer is then giving
a hardware interrupt to activate the scheduler at fixed time intervals. The interrupt with the
highest priority is used, so the scheduler task can abort all other contexts. Both Windows and
Linux use 20 ms, but this can be adjusted according to the maximum number of tasks that
can run simultaneously. Figure 9.4 shows the principle of a scheduler controlled by an interrupt.
At fixed time intervals the timer will issue an interrupt to the CPU, the current context
will be aborted, and the scheduler context will be started. The scheduler will save the context
of the aborted task (CPU registers, PC and stack), check which task is to be started, and
restore the context of this task. The running time of the scheduler should
be as short as possible, as this is overhead in the real-time system.
(b) An extension of the interrupt controlled scheduler uses shorter time intervals only
for checking if the running context should be aborted. Normally the running context will run
just as long as in 2a, but the system can react faster, for instance to external events.
This solution gives a faster reaction to events, but also more overhead, since the scheduler will
be active more often.
Remember that the interrupt and/or the scheduler can abort a task at any time, at any instruction
in the software task. This is important to have in mind when analysing and designing the system. Any
task can be aborted at any time, and any of the other tasks can be the next active task.
Figure 9.3: A non-preemptive scheduler; each task will finish its operations before the scheduler gets
control.
Figure 9.4: A scheduler controlled by an interrupt.
Figure 9.5: The states and relationships of a task or process.
A dispatcher is a software module (often a part of the scheduler) that gives control of the CPU to
the selected task by:
1. switching context (loading the CPU registers, setting the PC and the stack segment),
2. starting the task (jumping to the code at the Program Counter (PC)).
9.2 States
Using a scheduler gives the tasks different states. The scheduler will decide which task runs
next and must then know which of the tasks are ready to run. Remember that a task can also be
waiting for, for instance, an external event, and it is a waste of time starting a task that is just waiting.
Therefore the context of each task has a state:
1. Off-line; the task is not loaded into memory, no context exists for this task yet,
2. Waiting; the task is waiting for a resource. The scheduler will check the waiting condition every
time it is active to see if the state can be changed from waiting to ready,
3. Ready; the task is ready to run,
4. Running; the task that is running; only one task can be running at a time (on a single core CPU).
The relations between the different states are shown in Figure 9.5. The scheduler has tables or lists
containing information about every task in the system, including its state.
9.3 Strategies
When the scheduler is activated, the running context is aborted and saved. The scheduler will then
check the waiting list (table/queue) to see if any of the waiting tasks can be moved to the ready list.
Then the scheduler will check the ready list to decide which task runs next. The scheduler
must use a set of criteria to select the next task to run. The reasons for using a set of criteria:
1. CPU utilization: minimize overhead, by keeping the CPU as busy as possible,
2. Throughput: the number of processes or tasks completed per unit of time,
3. Turnaround time: from creation time to termination time; turnaround time = start time + waiting
in the ready queue + executing on the CPU + doing I/O,
4. Response time: from creation time to first output,
5. Fairness: each process or task should have a fair share of the CPU.
Some of the scheduling strategies can be:
1. cooperation; used when the scheduler is not controlled by an interrupt, but relies on cooperation
between the tasks. Used for non-preemptive schedulers,
2. first-come, first-served (FCFS); each task runs until it blocks or terminates. Used for non-preemptive
schedulers,
3. shortest job first (SJF); the task in the ready queue with the shortest running CPU time runs first.
Used for non-preemptive schedulers,
4. round robin; uses the sequence of the ready list (queue) to decide which task to start next. Used
for preemptive schedulers (a preemptive version of FCFS); works if all the tasks have the same
priority. This is shown in Figure 9.6,
5. priority; the task with the highest priority is started first. This may give problems for tasks with low
priority, but that can be solved by raising the priority of a task while it is waiting in the ready list (queue).

Figure 9.6: The round robin scheduling algorithm.

Priority inversion: if a task with a high priority must wait for a resource used by a task with a lower
priority, the priority of the low-priority task is increased until the resource is released. This is the
responsibility of the scheduler and is used with preemptive schedulers.
Other criteria can also be used; the criteria can be important when selecting an RTOS. In some RTOSes
it is also possible to select the set of criteria to be used when building or even starting the system.
Chapter 10
Synchronization
Normally a task has to be synchronised with other tasks in some way, and a real-time system has to
offer a set of services for synchronization, like semaphores, events, interprocess communication and shared
memory.
10.1 Semaphore
A semaphore is the simplest form of synchronization and has two basic functions:
1. request; the scheduler will move the task to the wait queue if the semaphore is already occupied by
another task. The function name can be wait() (in C#/.NET: WaitOne()),
2. release; releases the semaphore; the scheduler will move any blocked task from the wait queue to
the ready queue. The function name can be release() (in C#/.NET: Release()).
A semaphore can be a binary variable, being only 0 or 1, or an unsigned integer (char/short/integer),
being 0 or a positive number. The real-time system must then support a set of semaphore services; often
only two operations are needed: one operation for increasing the value, and one for decreasing
the value. The operation for decreasing the value will only do so if the value is 1 or greater. If the value
is 0, the task will wait until the value becomes 1 before it can continue. The names of these operations
can be wait() and release(), but the names may vary from real-time system to real-time system.
release() will always increment the value, while wait() will decrement the value, but only if the value
is 1 or higher. The wait() function will contain at least an operation for checking the value and an
operation for decrementing it if it is 1 or greater. The semaphore service guarantees that the scheduler
will not abort these operations; they are a kind of critical region.
The create() function makes a semaphore, stored in the operating system, that can be available for
all tasks in the system.
The semaphore services must be part of the real-time system so that every task has access to
them. One of the tasks must create the semaphore, and then this task and other tasks can use
it. Normally all resources will be protected by semaphores used for controlling access
to the resources. A task that is doing a wait() for a resource will be put in the wait queue until the
resource is ready for usage.
The coding of the semaphore can be sketched as follows (pseudocode; a real implementation must make
the test-and-clear in wait() atomic):

Semaphore sem1 = 1 ;          // binary semaphore, initially free

release()
{
    sem1 = 1 ;                // free the semaphore; the scheduler moves
                              // any blocked task to the ready queue
}

wait()
{
    if (sem1 == 1)
    {
        sem1 = 0 ;            // occupy the semaphore
        return OK ;
    }
    else
    {
        // Move the task to the wait queue
        // Do NOT use a busy-wait loop !!!!
    }
}

Figure 10.1: The RTS system with the software process, software threads and the displays.
Example 4 A real-time system has 5 displays, located at 5 different locations, all updated from
the same parameter in the physical process. The displays are updated from a text buffer of 8 characters
showing the parameter as a text string. The system contains a process for reading the values from the
physical process, calculating the parameter, and converting the parameter to the text string in the text
buffer. The process will also start 5 threads for displaying the value of the text string on the 5 displays.
See Figure 10.1.
Let us assume the current value of the text buffer is 0.9876 and the new calculated value from the
process is 1.0123. The operation for converting the floating point value (in binary form) to the text string
requires some CPU time, updating one character at a time in the text buffer.
The simultaneousness requirement of the system means that the process only has time to update the
first character in the buffer before the first thread is started. The value in the text buffer will now be
1.9876, and the first thread will write this value to its display and end its execution. The process
will then be started again and update the next character in the text buffer, now containing 1.0876. Then
the next thread will be started and will display the current value of the text buffer. The
process has time to update only one character in the text buffer between each display thread. The
simultaneousness requirement is met by letting the process and the threads run for short periods of
time, as shown in Figure 10.2.
This kind of multitasking uses the scheduler in the real-time operating system to switch between
the contexts of the process and the threads; the problem is the synchronization of the text buffer. The
displayed results will be as shown in Table 10.1, showing the values on the different displays and the
correct value.
How would you solve this problem using synchronization? You can use semaphores or events; events
will be the best solution for this type of synchronization.
10.2 Events
An event is used when several tasks are waiting for a semaphore and all of these tasks are going to do an
operation where a semaphore is not needed.
One example can be the reading of a value that has been calculated or estimated by a task. The task
writing the new value will use a semaphore to update the value, but after the value has been updated
Figure 10.2: The execution of the process and the threads simultaneously.

Table 10.1: The displayed values on the different displays when the parameter value is updated from 0.9876
to 1.0123.
Display  Display value  Correct value
1        1.9876         1.0123
2        1.0876         1.0123
3        1.0176         1.0123
4        1.0126         1.0123
5        1.0123         1.0123
several tasks can read the value without the use of a semaphore. All tasks wanting to read the value
will use a wait() function and be put in the wait queue of the scheduler. When the writing task has
changed the value and updated the synchronization, all waiting tasks will get an event and can read the new
value. See Figure 10.3, where task #1 updates the value using an event semaphore, and task #2 to
task #n only receive an event when they can read the value. This means that the tasks are moved
from the wait queue to the ready queue of the scheduler.
Real-time systems supporting events normally have the operations set(), clear() and wait() for events.
Figure 10.3: Using synchronization events to read an updated value. Task #1 is updating the value, and
task #2 to task #n are only reading the value.
10.3 Interprocess communication (IPC)
Semaphores and events are primitive operations and are difficult to use if communication of data or
messages between tasks is necessary. It is however important to notice that semaphores and events are
used by the more complex operations as well.
10.3.1 Pipes
A pipe is a FIFO (First In, First Out) buffer created in a common data area of the real-time system,
used as a read/write buffer for the tasks. A pipe can then be used as a simple communication channel
between two tasks, as one task can write data to the buffer and the other task can read data from the
buffer.
If a task is reading from an empty pipe, the task will be moved to the wait queue until the other
task writes data into the buffer.
10.3.2 Message queue
Message queues are sets of pipes used as post boxes, all created in the common data area of the real-time
system. These message queues can then be used for sending messages and data between the tasks in the
system. Synchronization can be done by sending commands to other tasks and waiting for the answers.
If a task tries to read from an empty message queue, the task will be moved to the wait queue of the
scheduler. If the message queue of another task is full when writing, the task is also put in the wait
queue until the message queue can be written to.
Synchronization can also be done by a master task sending commands to the other tasks about what to
do. The other tasks will perform the command and then return to waiting for the next command.
10.3.3 Shared memory
Shared memory is a common data area where the tasks can read and write random locations in the
buffer. With pipes and normal message queues a FIFO buffer is used; with shared memory a random
location in the buffer can be read and written.
10.4 Communication Protocols
As time is an important parameter in an RTOS, knowledge about the maximum time for receiving or
transmitting data is important to be able to calculate the deadlines for the functions in the RTOS. Protocols
are used for receiving and transmitting data between two or more devices in a system, often distributed
systems, and the protocols will therefore be important for the RTOS. The real-time requirements are
normally higher on the control level and field level. Today, protocols that use the Ethernet standard and
follow IEEE 802.3 are important, as they make it possible to use standard infrastructure building blocks.
In distributed systems there are two important properties regarding communication: a) direct cross
traffic between the nodes in the network, and b) the network topology. Direct cross traffic means that no
master is necessary; the messages can be sent as broadcast messages, and the information will be
available to all the nodes in the network at the same time. This gives more effective communication, less
traffic load in the network, and more capacity available on the master. A mixture of network topologies
gives a system that can easily be upgraded and/or extended.
10.4.1 Token Ring
The token ring protocol uses a token which defines the owner of the communication medium. The node
holding the token is the master of the network. The master node must pass the token to the next
node when finishing its network operation.
10.4.2 CSMA/CD
The Carrier Sense Multiple Access / Collision Detect (CSMA/CD) protocol is dening that every node
can be the master whenever the communication media is needed. If several nodes are trying to use the
Figure 10.4: The basic cycle of the POWERLINK communication (www.wikipedia.org 2010).
medium at the same time, a collision will occur, and both nodes will stay off the medium for a random time
before retrying.
This protocol cannot be used in real-time systems as the protocol delay is not defined; it is not
deterministic. Different industrial Ethernet protocols have been developed, some based on hardware
extensions and others on software extensions only. Hardware extensions mean that standard Ethernet
devices for CSMA/CD cannot be used, as with Profinet IRT (Isochronous Real-Time). Other protocols
like Ethernet POWERLINK use software only and are compatible with existing network devices like
routers, gateways, etc. These protocols use a combination of polling and timeslices where only one
node can use the communication medium in specific timeslices.
Ethernet POWERLINK
Ethernet POWERLINK is a deterministic real-time protocol for standard Ethernet communication sys-
tems. It is an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG) (www.wikipedia.org
2010). It is based on standard Ethernet, but extends the protocol with a mixed polling and timeslicing
mechanism. This gives:
- transfer of time-critical data in short isochronous cycles with configurable response time,
- time-synchronization of all nodes in the network with high precision (µs range),
- transmission of less time-critical data in a reserved asynchronous channel.
The communication medium is controlled by a Managing Node (MN) and the overall cycle time depends
on the amount of isochronous data, asynchronous data, and the number of nodes to be polled during
each cycle.
The basic cycle consists of the following phases (see also Figure 10.4):
1. Start Phase: The MN sends a synchronization message, called Start of Cycle (SoC), to all nodes,
2. Isochronous Phase: The MN calls each node to transfer time-critical data (PollReq and PollRes
messages). Since all nodes listen to all data during this phase, the communication system
provides a producer-consumer relationship. Time slots are used for each addressed node,
3. Asynchronous Phase: The MN grants the usage of the communication medium to one particular
node for sending non-real-time data (SoA and AsyncData messages). Standard IP-based protocols
and addressing can be used during this phase.
The quality of the real-time behaviour depends on the precision of the basic cycle time. The duration
of the isochronous and the asynchronous phase can be configured. POWERLINK extends the Data
Link Layer with time slots for each node in the isochronous phase, so that only a single node has access
to the communication medium at a time. The number of Controlled Nodes (CN) polled can vary from
cycle to cycle, as not all CNs need to be polled in every cycle. CNs with lower priority can share the
same time slot, giving a multiplexing of the lower priority slaves.
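The ordering of the three phases can be illustrated with a small simulation sketch (plain Python; the function and message strings are illustrative, not the POWERLINK API):

```python
# Illustrative simulation of one POWERLINK-style basic cycle:
# SoC broadcast, isochronous polling of selected CNs, then one
# asynchronous grant. Purely a sketch of the phase ordering.
def basic_cycle(polled_cns, async_owner):
    log = ["SoC -> all"]                    # 1. start phase
    for cn in polled_cns:                   # 2. isochronous phase
        log.append(f"PollReq -> CN{cn}")
        log.append(f"PollRes <- CN{cn}")    # every node hears this
    log.append("SoA -> all")                # 3. asynchronous phase
    log.append(f"AsyncData <- CN{async_owner}")
    return log

# Not every CN needs to be polled in every cycle:
cycle = basic_cycle(polled_cns=[1, 2], async_owner=3)
print(cycle[0], "...", cycle[-1])
```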
Chapter 11
Resources
A system with requirements for simultaneousness must also have some requirements for access to the resources
in the system. A resource can be a hardware or software device that two or more tasks must be
able to share. A resource cannot be used by several tasks at the same time, so the resource has to be
reserved by a task before the task can use it. The task must also release the resource when
finished using it.
The reservation and releasing of resources is often difficult, especially to debug when it is not working.
Common problems are:
1. Forgetting to reserve a resource before using it,
2. Forgetting to release the resource after using it.
A good advice is to reserve only one resource at a time and release the resource before reserving
another resource. Deadlock is a situation that can arise if several tasks try to reserve a set of
resources.
11.1 Deadlock
Deadlock arises if task #1 has reserved resource A and needs resource B to finish its operation, while
task #2 has reserved resource B and needs resource A to finish its operation. These tasks will then never
be able to finish their operations.
// Task #1
resourceA.wait() ; // wait for resource A
resourceB.wait() ; // wait for resource B
// Task #2
resourceB.wait() ; // wait for resource B
resourceA.wait() ; // wait for resource A
Example 5 In an alarm system, all important alarms should be saved on both disk and paper. Several
tasks perform alarm checking, and these tasks then log the alarm on the disk, which is one resource,
and print it on the printer, which is another resource. The scheduler runs task #1, which
detects an alarm situation and reserves the disk. Then the scheduler runs task #2, which also detects
an alarm situation; task #2 is written by another programmer and reserves the printer first. Task #1
is then activated again, writes the alarm message to the disk and tries to reserve the printer. As the
printer is already reserved, task #1 is put into the wait queue and task #2 is started. Task #2
writes the message on the printer and tries to reserve the disk for logging the alarm message. Task #2
is also put in the wait queue because the disk resource is already reserved. Both tasks will be in
the wait queue forever ...
Remark 6 Also note that the watchdog task will not detect this type of error, and will not reset the
system!
Table 11.1: The tasks for the alarm system using the wait() and release() functions.
Step Task#1 Task#2
N .. ..
N+1 wait(disk) wait(printer)
N+2 log alarm print alarm
N+3 wait(printer) wait(disk)
N+4 print alarm log alarm
N+5 release(printer) release(disk)
N+6 release(disk) release(printer)
N+7 .. ..
Figure 11.1: A scenario without deadlock is shown to the left, and a scenario with deadlock is shown to
the right. The tasks are reserving the disk and printer for logging the alarm states.
Table 11.1 shows the usage of the wait() and release() functions for the example.
Two scenarios for the tasks in Table 11.1 are shown in Figure 11.1. The scenario to the left is without
the deadlock situation: each task reserves the disk and printer resources for logging the alarm
states. In this scenario the tasks do not request the disk and printer resources at exactly the same time,
and a task releases the resources before the next task requests the same resource. The scenario
to the right shows a deadlock situation because both tasks request the resources at the same time,
and neither will release any resource before both resources are reserved.
How to avoid the deadlock situation? Reserve and release only one resource at a time!
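The one-resource-at-a-time advice can be sketched with ordinary locks (a Python sketch; `disk` and `printer` are stand-ins for the RTOS resources of Example 5):

```python
import threading

# Stand-ins for the disk and printer resources of Example 5.
disk = threading.Lock()
printer = threading.Lock()
log = []

def alarm_task(name):
    # Reserve ONE resource at a time and release it before
    # reserving the next -- the deadlock of Example 5 cannot occur.
    with disk:                  # reserve disk, log alarm, release disk
        log.append(f"{name}: log alarm")
    with printer:               # only then reserve the printer
        log.append(f"{name}: print alarm")

t1 = threading.Thread(target=alarm_task, args=("task#1",))
t2 = threading.Thread(target=alarm_task, args=("task#2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(log))  # 4 -- both tasks completed, no deadlock
```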
Deadlock can also arise in systems using messages for synchronization between the tasks. The system
has two tasks that should be synchronized by the x and y messages, shown with the code below. The
receive() method will wait until a message is received before returning control back to the task.
// Task #1
receive() ; // Wait for the next message
send(x) ; // Send message x
// Task #2
receive() ; // Wait for the next message
send(y) ; // Send message y
This example will give a deadlock because both tasks will wait for a new message. How can
deadlock be avoided in this example? (HINT: Change the sequence of the rx/tx functions in the tasks,
so the two tasks have different sequences.)
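The hint can be sketched with two queues, one per direction (Python sketch; send/receive are modelled with queue.Queue):

```python
import queue
import threading

# One message queue per direction between the two tasks.
to_task1 = queue.Queue()
to_task2 = queue.Queue()
trace = []

def task1():
    # Task #1 SENDS first ...
    to_task2.put("x")
    trace.append(to_task1.get())   # ... then waits for y

def task2():
    # ... while Task #2 RECEIVES first: the two tasks use different
    # sequences, so the mutual wait (deadlock) cannot occur.
    trace.append(to_task2.get())
    to_task1.put("y")

t1 = threading.Thread(target=task1)
t2 = threading.Thread(target=task2)
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(trace))  # ['x', 'y']
```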
Figure 11.2: Reservation of plane seats from different airports.
11.2 Critical region
It is often necessary to protect a sequence of commands, in some sort of high-level programming language,
to avoid the scheduler aborting this sequence. This sequence is called a critical region and will
never be aborted by the scheduler or another interrupt function; it must be a service of the real-time
operating system.
Example 7 An example of a critical region is a reservation system for an airplane. Two passengers are
checking in, one at Oslo/Gardermoen and one at Torp/Sandefjord, both at exactly the same time. Both
passengers are going to the USA using the same plane from Schiphol/Amsterdam. See Figure 11.2 for the
airports and reservation system.
Both passengers want seat 9A, and without a critical region the sequence may be:
1. The system at Oslo/Gardermoen calls the reservation system to check seat 9A,
2. The system at Torp/Sandefjord calls the reservation system to check seat 9A,
3. The scheduler on the reservation system starts the task for Oslo/Gardermoen,
4. The reservation system checks that seat 9A is free,
5. The scheduler on the reservation system switches to the task for Torp/Sandefjord,
6. The reservation system checks that seat 9A is free,
7. The scheduler on the reservation system switches back to the task for Oslo/Gardermoen,
8. The reservation system informs the system at Oslo/Gardermoen that seat 9A is taken,
9. The scheduler on the reservation system switches to the task for Torp/Sandefjord,
10. The reservation system informs the system at Torp/Sandefjord that seat 9A is taken.
Using a critical region around the free/taken sequence, the reservation will now be:
1. The system at Oslo/Gardermoen calls the reservation system to check seat 9A,
2. The system at Torp/Sandefjord calls the reservation system to check seat 9A,
3. The scheduler on the reservation system starts the task for Oslo/Gardermoen,
4. The task for Oslo/Gardermoen enters a critical region,
5. The reservation system checks that seat 9A is free,
6. The reservation system informs the system at Oslo/Gardermoen that seat 9A is taken,
7. The task for Oslo/Gardermoen leaves the critical region,
8. The scheduler on the reservation system switches to the task for Torp/Sandefjord,
9. The reservation system informs the system at Torp/Sandefjord that seat 9A is busy.
As the critical region disables the scheduler and other interrupts, it must be as short as possible.
Some questions:
1. Never wait for or try to reserve a resource inside a critical region. Why?
2. Can you implement a critical region using one semaphore?
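The check-and-reserve sequence of Example 7 can be sketched as one protected region using a single lock (a Python sketch; in a real RTOS this would be the critical-region service rather than a mutex):

```python
import threading

seats = {"9A": "free"}
region = threading.Lock()   # stands in for the critical-region service
answers = []

def check_in(airport, seat):
    # The free/taken check and the update form ONE critical region,
    # so the scheduler cannot interleave two check-ins here.
    with region:
        if seats[seat] == "free":
            seats[seat] = airport
            answers.append((airport, "taken"))
        else:
            answers.append((airport, "busy"))

t1 = threading.Thread(target=check_in, args=("Oslo/Gardermoen", "9A"))
t2 = threading.Thread(target=check_in, args=("Torp/Sandefjord", "9A"))
t1.start(); t2.start()
t1.join(); t2.join()
# Exactly one passenger gets the seat, the other is told "busy".
print(sorted(a for _, a in answers))  # ['busy', 'taken']
```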
Chapter 12
Software modules
When analyzing and designing a real-time system, the system will be divided into tasks. Designing
applications for real-time systems is always challenging. One way to decrease the complexity of these
applications is to use a task-oriented design and divide a project into different modules (or tasks). Each
module is then responsible for a specific sub-part of the application. With such a system it is important
to be able to specify that some modules (or tasks) are more important than others. The application can
then be divided into several tasks, and each task can be developed as a software application or program,
but the tasks need to intercommunicate with each other.
After compiling the software, an image of the system's memory will be made, and this image must be
transferred to the target's disk, EPROM or FLASH memory, depending on the type of real-time system.
Figure 12.1 shows one way of developing software for a real-time system: the software is developed using
a standard development system and must be uploaded to the real-time system for final testing.
The software is uploaded to the real-time system and started using some type of boot loader. The
software will be located in memory with a code area (read-only), a data area (both read-only and read-
write), a working area, and a stack area. The stack is located from the top of the memory and is used
for temporary storage. The contents of the memory are shown in Figure 12.2, loaded from a storage
device to the left and loaded from an EPROM/FLASH system to the right.
The running software is called a context, consisting of:
1. the location of the application code, application data and stack in the address range,
2. the contents of all the CPU registers,
3. the content of the program counter (PC) (pointing to the next program instruction to be per-
formed).
Every program task is developed as a standalone application that will be implemented in the real-time
system as a task or a process.
12.1 Instruction time
An instruction cycle is the time period during which a computer processes a machine language instruction
from its memory, or the sequence of actions that the central processing unit (CPU) performs to execute
each machine code instruction in a program.
(Notes: EPROM: Erasable Programmable Read Only Memory; can also be ROM, PROM and EEPROM
(Electrically Erasable PROM). FLASH: similar to EEPROM, but erasing can only be done in blocks or
for the entire chip. Boot loader: software copying the application from the storage device into the working
memory of the system. The working area is often called the heap.)
Figure 12.1: Developing software for real-time systems.
Figure 12.2: The context of a running software task.
The name fetch-and-execute cycle is commonly used for the instruction cycle. The instruction cycle
will vary from instruction to instruction, and a CPU normally has a large set of instructions. The operations
in an instruction cycle are:
1. Fetch the instruction; read the contents of the memory location pointed to by the program counter (PC),
2. Decode the instruction; the instruction register will hold the instruction,
3. Execute the instruction; a control unit is instructed by the instruction register how this instruction
shall be executed, often using the arithmetic logic unit (ALU),
4. Store the results; any results will be stored in memory.
In some systems it can be necessary to disassemble the application code to get a list of the CPU
instructions. The sequence of the CPU instructions can be used to calculate the exact execution time of
the application or part of the application.
An example of the instruction set for a CPU is shown in Figure 12.3. This figure shows a few
instructions for the 80386 CPU, giving the instruction codes and number of clock cycles.
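Summing the cycle counts over a disassembled instruction sequence gives the execution time; a sketch with assumed cycle counts and clock frequency (the numbers below are illustrative, not the real 80386 figures):

```python
# Sketch: execution time from an instruction trace and a cycle table.
# Cycle counts below are illustrative, NOT the real 80386 numbers.
cycles = {"MOV": 2, "ADD": 2, "CMP": 2, "JNZ": 9}

def execution_time_us(trace, clock_hz):
    # Total clock cycles divided by the clock frequency, in microseconds.
    total_cycles = sum(cycles[instr] for instr in trace)
    return total_cycles / clock_hz * 1e6

trace = ["MOV", "ADD", "CMP", "JNZ"]
t = execution_time_us(trace, clock_hz=16e6)  # 16 MHz clock (assumed)
print(round(t, 3))  # 15 cycles at 16 MHz = 0.938 us
```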
12.2 Software application
A running application is a context and can be either a process, a task or a thread.
12.2.1 Process
Processes are used for program tasks in bigger real-time systems and require more memory and hardware
devices. The most important hardware unit is the MMU (Memory Management Unit), a unit converting
addresses from a virtual (logical) memory area to the physical memory area of the system. Normally,
real-time systems using 32- and 64-bit microprocessors (CPUs) have an MMU, and software applications
are implemented as processes.
Figure 12.3: The first page of the instruction set for the 80386 CPU, the father of the CPUs used in Windows
computers today (Int 1988).
Figure 12.4: The memory layout and the PID list in a real-time operating system.
The reason for the MMU is a more effective utilization of the memory, and to ensure that no
process can access a memory area outside its own. The advantage is that when a software
application contains an error, it will not be able to influence other applications in the
system. The disadvantage is that a software application will not be able to access data from other
applications. How shall these processes then be able to exchange data or be synchronized? The solution is
to use operating system services for data exchange and synchronization.
The operating system must maintain the MMU table and the process ID (PID) lists, shown in Figure
12.4. The operating system also contains services for synchronization between the processes, and in
modern operating systems threads can also be used.
12.2.2 Thread
A thread, also called a lightweight process, must be started by a process. It will have a new code
memory area (with its own program counter), but will have access to the same data memory area as
the process. A process can start a number of threads; all of them will have separate code memory areas
but all will share the same data memory area.
Figure 12.5 shows a process containing a code memory area and a data memory area. The process has
also started a number of threads, each having a separate code memory area but all sharing a common
data memory area.
When the process and the threads share the same data area, synchronization becomes important. Nor-
mally a thread is started to perform a small part of the program task, and often this small part
of the program task takes some time.
Threads can be very useful for I/O functions, because:
1. error checks can be done in the thread and will not influence the execution of the process,
2. scaling is easy; just start a new thread if more I/O is necessary,
3. when waiting for an event, only the thread will wait for the event; the rest of the application will run as
normal.
The drawback when using threads is synchronization; in a real-time system the process and the
threads must be executed simultaneously, and Example 4 shows the problem of synchronization.
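Point 3 can be sketched with a dedicated I/O thread that blocks on the event while the rest of the application keeps running (a Python sketch; the queue stands in for the awaited I/O event):

```python
import queue
import threading

events = queue.Queue()
samples = []

def io_thread():
    # Only this thread blocks waiting for the event; the main
    # context keeps doing productive work meanwhile.
    evt = events.get()          # blocks until the event arrives
    samples.append(evt)

t = threading.Thread(target=io_thread)
t.start()

work_done = 0
for _ in range(3):              # the "rest of the application" runs on
    work_done += 1

events.put("sensor-event")      # the awaited event finally occurs
t.join()
print(work_done, samples)  # 3 ['sensor-event']
```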
Figure 12.5: A single process with a number of threads.
12.2.3 Task
Smaller real-time systems using 4-, 8-, or 16-bit microprocessors (CPUs) do not have an MMU, and there
is no mapping between virtual (logical) and physical memory. With no MMU it is not
possible to have boundary checks between the program tasks, and these program tasks are called
tasks. A task can always access the whole memory area, as shown in Figure 12.6, which shows the task ID
list and the memory where each task can access the whole memory area.
The advantages of using tasks (and no MMU):
1. a faster real-time operating system, as no conversion between logical and physical addresses is necessary,
2. all tasks can access the whole memory area, making it easy and fast to exchange data between the tasks,
3. easy to synchronize the tasks.
The disadvantages of using tasks (and no MMU):
1. difficult to trace a software problem, as a task can destroy data for the other tasks (which one did it first?),
2. all access to common data has to be synchronized.
From time to time the terms process and task are mixed up in the literature; the basis is that
a process is a software application on a mainframe computer while a task is a software application on a
microcomputer. A common description of a job in a real-time system is a task, so this can be the reason
for the mixture. It is however important to understand the technical differences between a process and a task.
A process:
1. uses a logical or virtual memory area (needs an MMU),
2. must use operating system services for communication and synchronization with other processes,
3. with a software error will not destroy the data of other processes in the system.
A task:
1. uses a physical memory area,
2. can communicate directly with other tasks using memory buffers, but should of course use OS
functions for synchronization,
3. can destroy the data of other tasks in the system in case of an error.
Figure 12.6: A real-time system with 4 tasks with common data area and no check of memory addresses.
Figure 12.7: A LabVIEW application with two independent loops running on two different cores on the
multicore processor (www.ni.com: jan-10).
12.3 Core and Multicore
A CPU can contain one core or several cores. A core is a single CPU with the registers, program counter
(PC), memory and I/O controller, and the arithmetic logic unit. The core is the CPU resource, and
multicore means that the system has several CPU resources. The cores will use the same memory and
I/O devices and address range, meaning that tasks/processes can share resources the same way as in a
single-core system. A multicore system will have several run queues, meaning that the scheduler has to
decide the next running task for all the cores.
Figure 12.7 shows how two independent loops can utilize the two cores in a multicore processor.
12.4 Input monitoring
A real-time system must be preemptive, meaning that a software context can be aborted by another
software context. This is solved by interrupts, either hardware interrupts or software interrupts. Hardware
interrupts are also very useful when monitoring an event in the physical process. This monitoring can be
done in two different ways:
1. interrupt,
2. polling.
Figure 12.8: Interrupt control in the upper part and polling in the lower part. Notice the waste of CPU
time in the lower part.
Figure 12.9: Using an interrupt task for reading the keyboard characters.
Using interrupts, a real-time system utilizes the preemptive support and switches the software context
to read the event, then switches back to the interrupted context. See the top section of Figure 12.8.
Using polling, the real-time system must have a process/task checking the event all the time, wasting
a lot of unnecessary CPU power. See the bottom section of Figure 12.8.
Interrupts should be used for digital signals (state changes), while polling should be used for analog
values. Analog values vary all the time, so a new value should be read at every polling time. The
polling time can be the sampling rate of the monitoring system.
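A polling task for an analog value can be sketched as a loop that sleeps for one sampling period between reads (a Python sketch; read_adc is a hypothetical stand-in for the analog input driver):

```python
import time

def read_adc():
    # Hypothetical stand-in for reading an analog input channel.
    return 3.3

def polling_task(period_s, n_samples):
    # Poll the analog value once per sampling period; the sleep()
    # hands the CPU back to the scheduler instead of busy-waiting.
    samples = []
    for _ in range(n_samples):
        samples.append(read_adc())
        time.sleep(period_s)
    return samples

values = polling_task(period_s=0.01, n_samples=5)  # 100 Hz sampling
print(len(values))  # 5
```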
12.4.1 Example
A keyboard is a good example of a device using interrupts. When a key is pressed, the current software
context is interrupted, the software context for the keyboard is started, the pressed key is read and saved
in memory, and the interrupted software context is resumed. See Figure 12.9.
Using polling, the keyboard must be polled fast enough not to lose any keys, meaning that the
keyboard has to be polled at least every 500 ms, all the time.
12.4.2 Priority
A microprocessor often has a lot of interrupts, both hardware and software. These interrupts have
different priorities, and an interrupt task with lower priority will be aborted by an interrupt task
with higher priority. The right priorities of the interrupts must therefore be assigned during the analysis and
design of the real-time system.
Figure 12.10: Priority inversion where a low priority task TaskL has exclusive access to a resource and
prevents the high priority task TaskH from running (www.iar.com: Nov-08).
Figure 12.11: Priority inversion with inheritance where a low priority task TaskL has exclusive access to
a resource, but inherits the high priority from TaskH (www.iar.com: Nov-08).
Priority inversion
A system with multiple tasks running in parallel may in some cases let a low priority task cause
a high priority task to be halted. In systems where different tasks need exclusive access to a resource,
priority inversion can occur: a task with a high priority is effectively given a low priority for a specific period
of time. A low priority task gets to run and starts to use a resource with exclusive access. If a higher
priority task needs the same resource, it halts until the low priority task has released the resource. This
effectively gives the high priority task a priority level beneath the low priority task.
This can be a problem for the task with the high priority because its time requirements are stricter
than those of the tasks with lower priority. This is shown in Figure 12.10 where the high priority task, TaskH, is
given a priority lower than the low priority task, TaskL, in the period marked with priority inversion.
There are different solutions to this problem; the most common ones are:
1. priority inheritance; let the low priority task inherit the priority from the tasks waiting for the
resource,
2. disabling task switching to protect critical sections; turn off the switching when using the resources.
Priority inheritance lets a low priority task inherit the highest priority from any task waiting
for the resource. This is shown in Figure 12.11 where the low priority task TaskL inherits the priority
from the high priority task TaskH. The low priority task TaskL will finish using the resource and let
the high priority task TaskH finish before the medium priority task TaskM is started.
12.5 Watchdog
The watchdog is a monitoring function of the RTOS system: a hardware device, often a counter, that will
reset the CPU after a specific amount of time. A task with the lowest priority is used for resetting the
hardware device, and the system has to be designed so that the watchdog task will not reset the system
when everything is working OK. If a software error (or system error) occurs, the watchdog will not be
reset and the system will be restarted. It is however important to understand that the watchdog will only
catch software that misbehaves with endless loops or crashes; it will not catch problems like deadlock. Why?
Figure 12.12: The watchdog is a hardware device, a counter, using the hardware reset signal for reset of
the CPU.
Figure 12.13: The watchdog counts the number of pulses, with a limit at which the output will
be activated. The reset signal will force the counter to start counting from 0.
Figure 12.12 shows the principle of the watchdog. The watchdog is a hardware device, a counter,
connected to an oscillator or a crystal. The counter will count a number of pulses, and the output will go
high when this number of pulses is reached. The counter output is connected to the hardware reset input
of the CPU, resetting the CPU and thereby restarting the whole real-time system.
The reset task in the system has the responsibility to reset the counter, to make it start from 0, so it will
never reset the CPU. Resetting the CPU is an error condition and should normally not happen. Figure
12.13 shows the function of the watchdog. The watchdog will count the number of pulses, and the output
will be activated when a specific number of pulses is counted. The reset task should reset the watchdog
counter before this specific number of pulses is reached, forcing the counter to start from 0.
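The counter behaviour can be sketched in a few lines (a Python sketch; the pulse counts and limit are illustrative):

```python
# Sketch of the watchdog counter: it counts oscillator pulses and
# activates the CPU reset output at a limit, unless the reset task
# "kicks" it back to 0 in time. The numbers are illustrative.
class Watchdog:
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def pulse(self):
        # One oscillator pulse; True means the CPU reset output fires.
        self.count += 1
        return self.count >= self.limit

    def kick(self):
        # Called by the lowest-priority reset task while all is well.
        self.count = 0

wd = Watchdog(limit=5)
fired = False
for tick in range(20):
    if tick % 3 == 0:
        wd.kick()           # healthy system: the reset task keeps running
    fired = fired or wd.pulse()
print(fired)  # False -- the counter never reaches its limit
```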
Chapter 13
Design
A real-time system consists of a set of time requirements, and these time requirements are the key focus of
the design. The functional requirements, often controlled by the time requirements, require a distribution
of the functions. This distribution is often solved by dividing into sub-tasks, and in a real-time system
the distribution will be structured with one time requirement, or time requirements belonging together,
in the same sub-task.
The sub-tasks in a real-time system will often run concurrently, placing stronger restraints on the
design (Ripps 1989). The rules for real-time tasking (Ripps 1989):
1. Use structured design (makes the application easier to design, implement, test, and maintain);
(a) The main activities should each be assigned to at least one separate task,
(b) Closely related functions should be kept in the same task,
(c) Functions that deal with the same inputs, data, and/or outputs should be kept in the same
task,
(d) Activity with I/O devices should be kept in a separate task; this gives better error handling and
scaling.
2. Try to keep the CPU(s) always busy with productive work;
(a) Use the operating system's wait function instead of a wait loop.
3. Functions with different attributes must be assigned to different tasks;
(a) Functions with different requirements should be assigned to separate tasks,
(b) Functions with significantly different levels of urgency must be assigned to separate tasks,
(c) Functions proceeding at different time scales should be assigned to separate tasks.
Chapter 14
Programming
14.1 Introduction
When programming a real-time system it is important to focus on the deadlines, simultaneous-
ness, and the usage of resources. The software system has to be as fast as possible, never waste any CPU
time, and minimize the usage of the resources.
If the program is going to wait for an event, never ever use a waiting loop; the solution is to
tell the scheduler that the task has finished and will wait for a period of time. The normal operation
is sleep(time), where the waiting time will be time in milliseconds. Whenever a task performs this
sleep() function, the scheduler is told to place the task in the waiting queue and start the next task in
the ready queue. Every time the scheduler is activated, it will check the tasks in the waiting queue, and
the time will be decremented until 0. At 0 the task is moved to the ready queue again. A common trick is
to use sleep(0), which will start the next task in the ready queue and place the aborted task at
the back of the ready queue.
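The difference between a wait loop and handing control to the scheduler can be sketched as follows (a Python sketch; time.sleep plays the role of the RTOS sleep service):

```python
import time

def busy_wait(deadline):
    # BAD: burns CPU in a loop until the time has passed.
    spins = 0
    while time.monotonic() < deadline:
        spins += 1          # wasted work, nothing productive
    return spins

def scheduler_wait(seconds):
    # GOOD: tell the scheduler we are done for `seconds`;
    # the CPU is free for the other tasks in the ready queue.
    time.sleep(seconds)

start = time.monotonic()
scheduler_wait(0.05)
elapsed = time.monotonic() - start
print(elapsed >= 0.05)  # True -- waited without spinning
```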
The program flow in a real-time application will depend on external events, as distinct from
office applications using a sequential program flow controlled by user and data. A real-time applica-
tion responds to external events from sensors among others, while office applications respond to user
interactions and available data.
Some advice when designing real-time systems:
1. Always think synchronization when using resources,
2. Always think synchronization when changing global data,
3. Always use a sleep() function if the application has to wait (or can wait),
4. State machines are useful in real-time systems.
14.2 Memory allocation
Memory allocation is a service of the operating system and is also treated as a resource in a real-time
system. The normal operation is to allocate memory when needed and release the memory after
usage. This should be avoided in a real-time system, because the memory will become fragmented and the
software will eventually not be able to allocate more memory. This is especially important for 24/7 systems,
as the memory allocation service (table) will never be reset. The solution for the real-time system can be
to allocate all the memory at startup time, and just reuse this memory while running.
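The allocate-at-startup pattern can be sketched as a fixed pool of buffers that tasks borrow and return instead of allocating at runtime (a Python sketch; pool size and buffer size are illustrative):

```python
# Sketch of startup-time allocation: a fixed pool of buffers is
# created once, and tasks borrow/return them instead of asking the
# allocator at runtime (no fragmentation, bounded memory use).
class BufferPool:
    def __init__(self, n_buffers, size):
        self.free = [bytearray(size) for _ in range(n_buffers)]

    def acquire(self):
        if not self.free:
            raise RuntimeError("pool exhausted -- a design error")
        return self.free.pop()

    def release(self, buf):
        self.free.append(buf)   # reuse, never deallocate

pool = BufferPool(n_buffers=4, size=256)   # all memory taken at startup
buf = pool.acquire()                       # no allocation happens here
buf[0] = 42
pool.release(buf)
print(len(pool.free))  # 4
```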
Memory allocation is used more in an object-oriented system, as every new() method
allocates memory for the class instance. Some programming languages like C++ use a delete() method
to release memory. In programming languages like C# and Java, a garbage collector runs in
the background, releasing memory for objects with no references. The garbage collector will use CPU
time, causing problems with the deadlines, and the memory will be fragmented, giving problems for the
allocation function.
14.3 Posix.4
The Portable Operating System Interface (Posix) is a standard from IEEE, and the purpose of Posix is
to have portable source code for applications. The Posix standard defines a set of system functions
that can be used to make source code more portable, and Posix.4 defines a set of real-time functions.
Using a Posix.4-compatible real-time system can be useful when developing real-time systems.
14.4 C# example
An application consisting of the source code listed below shows how C# can be used for threads, waiting,
and concurrent execution. The main program starts 3 threads, and all 4 tasks (the process and the threads) run
simultaneously, each waiting for a while and printing a message on the screen in each loop. The
application consists of 2 classes: a thread class (ThreadClass) and the main program (ThreadEks).
Both the main program and the threads use the Sleep() function to let the scheduler suspend
the context, move the context to the waiting queue, and start the next context in the run queue. The
application uses different waiting times for the main program and the threads, and every context
writes its status on the screen. The output of the application is shown in Figure 14.1, showing the
state of the main program and the threads.
The application starts in the ThreadEks class, in the Main function. The steps of the main
application are:
1. Write the text string "Start of main program" on the screen,
2. Create the first thread class, set the name Thread#1 and the wait time to 200 ms,
3. Create the second thread class, set the name Thread#2 and the wait time to 300 ms,
4. Create the third thread class, set the name Thread#3 and the wait time to 400 ms,
5. Enter a loop counting to 30; in each loop, print a dot on the screen and wait 100 ms,
6. End the main program by writing "End of main program".
What will be the duration of the main program?
Duration=__________
The steps of each thread will be:
1. Set the name and wait time of the thread,
2. Create the threading function of the thread class (the run function),
3. Start the threading function (the run function),
4. Write the text string "Starting Thread#n",
5. Loop 5 times, waiting the wait time for the thread and then writing the thread name and the loop
counter on the screen,
6. End the thread by writing "Ending Thread#n".
What will be the duration of each thread?
Thread#1=________; Thread#2=__________; Thread#3=___________
using System;
using System.Threading;
// Example using Sleep() and threads in C#
namespace ThreadEks
{
/// <summary>
/// Thread class
/// </summary>
class ThreadClass
{
int loop_cnt ;
int loop_delay ;
Thread cThread ;
public ThreadClass(string name, int delay)
{
loop_cnt = 0 ;
loop_delay = delay ;
cThread = new Thread(new ThreadStart(this.run)) ;
cThread.Name = name ;
cThread.Start() ;
}
// The main function in the ThreadClass
void run()
{
Console.WriteLine(" Starting " + cThread.Name) ;
do
{
loop_cnt++ ;
Thread.Sleep(loop_delay) ;
Console.WriteLine(" " + cThread.Name + ": Loop=" + loop_cnt) ;
} while (loop_cnt < 5) ;
// Ending of the thread
Console.WriteLine(" Ending " + cThread.Name) ;
}
}
// The application
class ThreadEks
{
/// <summary>
/// Start of the main program
/// </summary>
static void Main(string[] args)
{
Console.WriteLine(" Start of main program ") ;
// Making 3 threads ..
ThreadClass ct1 = new ThreadClass("Thread#1", 200) ;
ThreadClass ct2 = new ThreadClass("Thread#2", 300) ;
ThreadClass ct3 = new ThreadClass("Thread#3", 400) ;
// Wait while the threads are running ...
for (int cnt = 0; cnt < 30; cnt++)
{
Console.Write(".") ;
Thread.Sleep(100) ;
}
// End of main program
Console.WriteLine(" End of main program ") ;
}
}
}
//
/////////////////////////// EOC ////////////////////////////////////
Figure 14.1: The output from the C# application consisting of the main program and 3 threads.
Figure 14.2: The execution of the main process and the 3 threads.
The timing of the main process and the threads shown in Figure 14.1 is shown in Figure 14.2.
Chapter 15
Operating systems
Most of you know Windows as an operating system, and some of you also Linux. These operating
systems are designed as General Purpose Operating Systems (GPOS), meaning that they are intended to be
general operating systems, but they are not very suitable as a Real-Time Operating System (RTOS).
Today's versions of Windows and Linux are probably more Network Operating Systems (NOS) than
GPOS.
Time is an important parameter in a RTOS, and the system needs to fulfil the requirements of
deadlines. This means that every function in the OS must have an indication of the maximum time used
before control is given back to the RTOS application. Without this information it is not possible to
use the OS, or its functions, in a RTOS. A RTOS is sometimes called a real-time multitasking kernel. A
RTOS is a software component that ensures efficient processing of time-critical events by dividing the
application into multiple independent modules called tasks.
There are two main technologies of real-time operating systems: shared-memory RTOS, where the
developer is responsible for protecting memory from mutual access by using specific RTOS tools like
semaphores, and direct message-passing RTOS, where data is encapsulated in messages used for both
inter-process communication and synchronization.
A lot of operating systems are designed as RTOS, varying both in complexity and price. The most popular
RTOSs (as of July 2006) are shown in Figure 15.1, with VxWorks, XP Embedded, Windows CE, DSP/BIOS, Red Hat
Linux and QNX on top.
A lot of different factors are important when selecting a RTOS, especially factors like the price of the
development system, the price of each license, and the development tools. Technical factors will be the
number of interrupt levels, the number of priority levels, and the strategy of the scheduler.
15.1 RTOS requirement
The requirements for a RTOS:
1. multitasking,
2. preemptible,
3. the number of priority levels should be at least 16, preferably 64 or 256; the number of levels
depends on the number of tasks in the system,
4. predictable synchronization mechanisms,
5. interrupt latency (depending on hardware),
6. several interrupt levels (depending on hardware).
Some of the important specifications for some OS systems are listed in Table 15.1.
Figure 15.1: The most popular RTOSs in July 2006, from EmbeddedSystems Europa June/July 2006, page 18.
Table 15.1: Some of the important OS specifications for real-time systems.
Requirements      VxWorks  XP Emb.    Win CE  Linux  QNX
Multitasking      Yes      Yes        Yes     Yes    Yes
Preemptible       Yes      Yes        Yes     Yes    Yes
Priority levels   256      16 or 32?  256     32     32
Figure 15.2: A simple OS with a set of drivers and basic OS functions. The applications will use the
drivers and the basic OS functions.
15.2 Driver
The driver is the software module used for communication between the RTOS and the hardware. An
operating system uses a Hardware Abstraction Layer (HAL) for communication with a virtual driver.
The driver will be a software module for communication between the virtual driver and the hardware.
The modules form a layered structure:
Operating system services (OS software)
Hardware Abstraction Layer (OS software)
Driver (software)
Hardware
The driver can be a problem for Windows and Linux systems, as the driver is often developed for a
GPOS and not a RTOS. In a RTOS the response time is important, and the driver must be developed for a
fast response from the hardware to the RTOS. The driver must also be robust (reliable), as a real-time
system will very often be a 24/7 system.
Figure 15.2 shows a simple OS or RTOS with drivers and a set of basic OS functions. The application
will use only the drivers and the OS functions to communicate with the hardware; the application should
NOT contain any hardware-specific code.
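The layering above can be illustrated with a minimal sketch. All class names here are my own invention, not an actual RTOS API: the application calls only an OS-level service, which goes through the HAL to the driver, so only the driver contains hardware-specific code.

```python
# Sketch of the layer structure: application -> OS service -> HAL -> driver.
class Driver:
    """Hardware-facing layer: the only place with device-specific code."""
    def read_register(self):
        return 42  # stand-in for a real hardware register access

class HardwareAbstractionLayer:
    """Presents a uniform interface to the OS, hiding the driver details."""
    def __init__(self, driver):
        self._driver = driver

    def read(self):
        return self._driver.read_register()

class OSService:
    """The only layer the application is allowed to call."""
    def __init__(self, hal):
        self._hal = hal

    def read_sensor(self):
        return self._hal.read()

# The application never touches Driver directly:
service = OSService(HardwareAbstractionLayer(Driver()))
print(service.read_sensor())  # 42
```

Porting to new hardware then means replacing only the Driver class; the application and OS layers are untouched.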
15.3 Windows / Linux
Windows and Linux are not RTOSs, even if the Linux kernel 2.6 and later has much better real-time
performance than before. One solution that can be used if you still want to use Windows or Linux is
shown in Figure 15.3: use a real-time kernel (a simple real-time operating
system) where the standard operating system, like Windows or Linux, runs as the process with the lowest
priority. The real-time kernel will be the scheduler for the real-time processes and the operating system
process, while the scheduler in the standard operating system schedules the standard operating
system processes. This system will then have both real-time support and standard operating system
support, and the standard operating system can even be installed from the standard CD (or DVD).
Another solution (www.ardence.com) is a real-time solution for Windows where the real-time system
runs in parallel with Windows, see Figure 15.4. The communication between Windows and the
real-time subsystem is solved using IPC, a driver, and a DLL (Dynamic Link Library). This way, real-time processes and
Windows processes can communicate and be synchronised.
Figure 15.5 shows a solution for Linux or Symbian OS where the OS is part of the User Level Code,
only a RT Executive is part of the Privileged Level Code.
Why do we want to use Windows or Linux?
1. Developer knowledge,
2. Development tools,
3. Debugging.
Figure 15.3: One solution to have real-time support together with a standard OS like Windows and
Linux.
Figure 15.4: A real-time solution for Windows (www.ardence.com).
Figure 15.5: A real-time solution for Linux or Symbian with a high degree of system security, as most of
the code is in User Level. This solution is for different types of phones.
Figure 15.6: A DOS terminal showing some commands and the results of these commands. The
terminal has only a text-based user interface, not a graphical user interface.
15.3.1 Windows history
Windows started with Microsoft DOS (MS-DOS) in 1982, the standard operating system for the first
PCs. MS-DOS was a single-user, single-task operating system with a DOS terminal for typing
the commands. Figure 15.6 shows such a text-based user interface, also known as a DOS window.
The history of the Windows operating system is:
Windows  Major changes
DOS      Text-based user interface and only one DOS window.
95       Greatly simplified interface (graphical); much more friendly to the average user.
98       Improved multimedia capabilities and built-in Internet functionality.
2000     Industrial-strength Windows NT code base, but in a much more polished package.
XP       Unified the Win9x and WinNT/2K code bases; allowed businesses to standardize on one OS.
Vista    Added too much security, preventing a lot of Windows XP applications from running.
7        Merged the good solutions from XP and Vista.
15.3.2 Windows CE or Linux
When selecting a RTOS, should you select Windows CE, Linux, or a specific RTOS?
1. Windows XP Embedded is an embedded OS, not a RTOS,
2. There is a license fee for Windows CE; Linux is free,
3. Windows CE has a subset of the Windows API,
4. Windows CE from v3.0 (2000) is a RTOS, from v4.2 it is a stable RTOS, and from v6.0 (2006) Visual Studio
can be used as a tool,
5. The startup cost is higher for Linux than for Windows CE,
6. Windows CE has only one distribution; Linux has several,
7. Windows CE has a footprint from 300 KB, but 8-12 MB is usual with display support
etc.; with Explorer the footprint will be about 20 MB. A minimum Linux footprint is from 400 KB,
normally about 1-4 MB; a normal Linux system is about 8-16 MB,
8. The documentation of Linux is much better than for Windows CE.
15.3.3 Windows XP Embedded
Windows XP Embedded is an OS built from modules and components.
15.4 QNX
QNX Neutrino is another popular RTOS that is both time tested and field proven (QNX has delivered
RTOSs since the beginning of the 1980s; see www.qnx.com for more information). The QNX Neutrino
RTOS is said to be a microkernel operating system, meaning that every driver, application, protocol stack,
and file system runs outside the kernel, in the safety of memory-protected user space. Any component
can fail, and will be automatically restarted without affecting other components or the kernel.
Technology overview (www.qnx.com: dec-08):
1. High availability solution;
(a) Process watchdog for application monitoring and recovery, self-healing inter-process communications, and restartable device drivers and operating system services,
(b) Virtually any component, even a low-level driver, can fail without damaging the kernel or
other components,
(c) Process model ensures that if a component fails, QNX Neutrino can cleanly terminate it and
reclaim any resources it was using, with no need to reboot.
2. Essential networking technologies including IPv4, IPv6, IPSec, FTP, HTTP, SSH, and Telnet,
3. Photon microGUI, a full-featured embedded graphical user interface,
4. Integrated file systems for flash devices and rotating media,
5. System visibility and debugging support,
6. Supported by QNX Momentics, the Eclipse-based integrated development environment,
7. Full memory protection where the OS can immediately identify the component responsible, at the
exact instruction,
8. Instrumented kernel and visualization tools that trace system events including interrupts, thread
state changes, synchronization, CPU utilization and more,
9. Scalability; Scale large or small using only the desired components,
10. Take advantage of built-in multiprocessing capabilities to harness the power of multi-core processors,
11. Simplify the design of fault-tolerant clusters with built-in transparent distributed processing,
12. Portability;
(a) Maximize application portability with extensive support for the POSIX standard, which allows
quick migration from Linux, Unix, and other open source programs,
(b) Target the best hardware platform for an embedded system and get up and running quickly
with runtime support and BSPs for popular chipsets, including MIPS, PowerPC, SH-4, ARM,
StrongArm, XScale, and x86
13. Field-tested binaries: drivers, applications, custom OS services, and so on can be reused across
entire product lines.
15.5 VxWorks
Chapter 16
RT system
16.1 Benefits of any RTOS
The benefits of using a RTOS (Schultz 1999):
1. It is easier to get all the details right when breaking a set of jobs into tasks,
2. Multitasking can make it easy to respond to real-time demands,
3. Intertask communication provides a solid method of controlling execution order and timing,
4. Tasks can be independent,
5. Tasks will be small and easier to manage,
6. Commercial or build-your-own?
(a) a commercial RTOS will be fully debugged and tested,
(b) a commercial RTOS will have a license fee,
(c) a commercial RTOS will be more general (efficiency and memory size).
16.2 Cost of RTOS
The cost of using a RTOS (Schultz 1999):
1. The hardware must have a specific timer and interrupt structure,
2. Multitasking costs time,
3. Time to learn programming the system.
16.3 Contents of a RTOS
1. Scheduler
2. Semaphore
Part IV
DAQ systems
Chapter 17
Sensor overview
A sensor device is often defined as a device that receives a signal or stimulus and responds with an
electrical signal (Kester 1999), (Fraden 2004). A sensor is a device that senses either an absolute value or
a change in a physical quantity. The concepts sensor and transducer are often mixed, but a transducer is
defined as a converter of one type of energy into another type of energy. A transducer converting a type
of energy into an electrical signal can be a sensor. Sensor signals are often incompatible with the input of
measurement systems, so the sensor signal must be conditioned. An overview of a measurement system
is shown in Figure 17.1.
A sensor is used for measuring various physical properties such as temperature, force, pressure, flow,
position, light intensity, etc. These properties act as the stimulus to the sensor, and the sensor output is
a measurement of this property. The stimulus is the quantity, property, or condition that is sensed and
converted to an electrical signal.
Sensors cannot operate by themselves; they must be part of a larger system consisting of a measurement system and a computer system, as shown in Figure 17.1. The important parts in this course
will be the sensor device, consisting of the sensing device, signal conditioning and converters, and the measurement system. The sensing device is the device that receives a signal or stimulus and responds
with some type of electrical signal. These electrical signals will always be analog signals, meaning
time-continuous signals where some time-varying feature of the signal is a representation of some other
time-varying quantity (www.wikipedia.org 2006). The primary disadvantage of an analog signal is the
influence of noise, meaning random variation; another disadvantage is that a computer can only use
digital signals. Digital signals are digital representations of discrete-time signals, which are often derived
from analog signals (www.wikipedia.org 2006). An important part of the sensor device will therefore be the
signal conditioning and converters. As shown in Figure 17.1, the signal conditioning and the converters can be
part of both the sensor devices and the measurement systems, meaning that not all sensor devices can
be connected to all measurement systems.
The sensor device normally consists of the sensing device, a signal conditioning device, and some sort of
transducer or transmitter, as shown in Figure 17.2. This sensor device can be connected to a measurement
system using a standard interface or a standard sensor device bus. A sensor device bus is an interface
where several sensors can be connected to the same wires, often using some sort of addressing for the sensor
devices.
Sensor devices can also have proprietary interfaces, meaning that the sensor devices can only be
connected to a measurement system designed for these sensors. This is shown in Figure 17.3.
Figure 17.1: The main modules in a measurement and control system. The sensors and the actuators
are an important part of such a system.
Figure 17.2: A sensor device with a standard interface.
Figure 17.3: A sensor device with a proprietary interface.
A transducer converts a physical phenomenon to an electrical signal, so the sensing device and the
signal conditioning device in Figures 17.2 and 17.3 form a transducer. A transmitter provides a specific
output signal, very often a standard interface signal like 4-20 mA (a current loop, an analog signal), HART
(a current loop with digital information added), RS-485 (a digital signal), or other types of
bus signals (analog or digital). A sensor transmitter will always have a transducer included, as the
transmitter is only used for transmitting the electrical signal to another device.
17.1 Sensor device types
Sensor devices can range from very simple devices to complex devices.
17.1.1 Passive or Active
A sensor device can be either passive or active. In a passive sensor the property of the device changes
without any additional energy consumption from the electrical circuits connected to the sensor, while an
active sensor requires an operating signal provided by an excitation circuit (Fraden 2004). Passive
sensors directly generate an electric signal in response to an external stimulus; energy is only needed
to amplify the analog signal. Examples of passive sensors are a thermocouple, where a voltage is
generated depending on the temperature, a photodiode, where the voltage over the diode depends
on the light, and a piezoelectric sensor, where the voltage depends on the pressure on the piezoelectric
element. Active sensors need external power for operation, where the sensor device modifies a signal
depending on the stimulus. Examples of active sensors are temperature-dependent resistors
(RTD) like the PT-100, where the current (excitation signal) generates an output voltage depending
on the resistance, light-dependent resistors (LDR), where the current (excitation signal) generates an
output voltage depending on the resistance, and an ultrasonic sensor, where a sound signal generates an
output voltage or current.
The difference is important if the sensor is going to be located in a hazardous area, i.e. an area with a
potentially explosive atmosphere. Active sensors must then be equipped with an extra device to limit
the amount of electrical energy used in the hazardous area, to avoid explosions under any possible fault
condition. The regulation of equipment in hazardous areas is in Europe controlled by the ATEX
(ATmosphères Explosibles) directives and guidelines.
Active sensors require external power for operation, called an excitation signal (Fraden 2004).
This signal is used by the sensor device to produce the output signal of the sensor device. An example
is shown in Figure 17.4, showing a current loop sensor using the same wires for both power and signal.
Figure 17.4: An active sensor shown as a current loop sensor (4-20 mA).
The output from the sensor is a current, and using a resistor the current can be converted to a
voltage. Normally the 4-20 mA standard is used, meaning that the electronics inside the sensor uses up to
4 mA, and the electronics adjusts the output current between 0 and 16 mA above this, depending on the
stimulus. The 4-20 mA standard is used a lot in industry due to its better noise immunity, using current
for signal communication, and the use of the same two wires for both power and signal.
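The current-to-voltage conversion and the 0-16 mA signal span described above can be sketched as follows. This is an illustration only: the 250 ohm shunt resistor is an assumed (though common) value, and the helper names are mine:

```python
# Sketch of reading a 4-20 mA current loop through a shunt resistor.
R_SHUNT = 250.0  # ohm; 4-20 mA over 250 ohm gives 1-5 V

def loop_voltage(current_ma):
    """Voltage over the shunt resistor for a given loop current (V = I * R)."""
    return (current_ma / 1000.0) * R_SHUNT

def percent_of_span(current_ma):
    """How far into the 0-16 mA signal span the sensor has driven the loop."""
    return (current_ma - 4.0) / 16.0 * 100.0

print(loop_voltage(4.0))       # 1.0 V at the lower range value
print(loop_voltage(20.0))      # 5.0 V at the upper range value
print(percent_of_span(12.0))   # 50.0 % of span
```

A reading below 4 mA (below 1 V over the shunt) would indicate a broken loop, which is one practical advantage of the live-zero 4 mA offset.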
17.1.2 Absolute or Relative
A sensor device can also be an absolute or a relative sensing device. An absolute sensor device detects a
stimulus in reference to an absolute physical scale that is independent of the measurement conditions,
whereas a relative sensor detects a stimulus relative to another stimulus (Fraden 2004).
17.1.3 Point or Continuous Measurement
A sensor device can be used for point measurement or continuous measurement. In point measurement
the output from the sensor device indicates the presence or absence of some property at a specified
point. Continuous measurement gives a continuous output signal, normally proportional to the
property of the measurand.
17.1.4 Contact or non-contact
A sensor device can be a contact device or a non-contact device. A contact device must be in contact
with the medium to measure the property, while the non-contact device will not be in physical contact
when measuring the property. A non-contact device therefore depends on a transfer of energy to the sensor for
measuring the property. This energy transfer can be either radiation or a reflected signal transmitted
from the sensor device.
17.1.5 Invasive or Intrusive
A sensor device can also be invasive or intrusive, indicating the influence on the measurand. Invasive
often means that the sensor will be in contact with the measurand, while intrusive means that the sensor
device will be disturbing the measurand as well.
Expressions like online, off-line, and inline can also be used, where online means invasive, inline means
intrusive, and off-line means non-contact. Online is often mixed with automatic, meaning measuring the
measurand in real time.
This means that a sensor device can be either passive or active, point or continuous, and contact or
non-contact. A contact sensor device can be either invasive or intrusive.
17.2 Sensor device properties
A sensor device converts the value of the measurand to an output signal of the sensor device, as shown
in Figure 17.5.
There are many important properties of the sensor device describing this conversion, and these
properties can be divided into 3 different groups:
Figure 17.5: The conversion of the measurand to the output of the sensor device.
1. concepts for the conversion,
2. concepts for the operating conditions,
3. concepts for the accuracy.
17.2.1 Concepts for Conversion
The concepts for the conversion are:
1. Lower range value (LRV); the lowest value of the measurand the sensor device can convert,
2. Upper range value (URV); the highest value of the measurand the sensor device can convert,
3. Range; the interval between the lower range value and the upper range value (range = URV − LRV),
4. Lower range limit; the lowest value the lower range value can be adjusted to,
5. Upper range limit; the highest value the upper range value can be adjusted to,
6. Overrange; the sensor device can be destroyed if the measurand is above this value,
7. Overrange limit; the sensor device will be destroyed if the measurand is above this value,
8. Unidirectional; zero is either the lower or the upper range value (0 °C to +125 °C),
9. Bidirectional; zero is between the lower and upper range values (−25 °C to +125 °C),
10. Suppressed-zero; zero is below the lower range value (+10 °C to +125 °C),
11. Elevated-zero; zero is above the upper range value (−125 °C to −25 °C).
Sensor output:
1. Lower output value (LOV); the output value of the sensor device for the lower range value,
2. Upper output value (UOV); the output value of the sensor device for the upper range value.
Knowing the measurand, the output value of the sensor will be:
output_value = LOV + ((measurand − LRV) / (URV − LRV)) · (UOV − LOV)
What will the measurand be if we know the output value of the sensor device?
Example 8 You have a temperature sensor with a range of [−20 °C, 80 °C] and an output range of
4-20 mA. Show that the output current is 12 mA when the temperature is 30 °C.
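The conversion formula can be checked against Example 8 with a small sketch; the function names below are mine, and the inverse function answers the question of finding the measurand from a known output:

```python
# Sketch of the linear conversion formula, applied to Example 8:
# LRV = -20 degC, URV = 80 degC, LOV = 4 mA, UOV = 20 mA.
def sensor_output(measurand, lrv, urv, lov, uov):
    """Linear conversion from measurand to sensor output value."""
    return lov + (measurand - lrv) / (urv - lrv) * (uov - lov)

def measurand_from_output(output, lrv, urv, lov, uov):
    """The inverse conversion: from a known output value back to the measurand."""
    return lrv + (output - lov) / (uov - lov) * (urv - lrv)

print(sensor_output(30.0, -20.0, 80.0, 4.0, 20.0))         # 12.0 (mA)
print(measurand_from_output(12.0, -20.0, 80.0, 4.0, 20.0)) # 30.0 (degC)
```

At 30 °C the measurand is halfway through the 100 °C range, so the output is halfway through the 16 mA span: 4 + 8 = 12 mA.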
Figure 17.6: The operating conditions of a sensor device (from (Olsen 2005)).
Figure 17.7: Random errors when sensing the measurand.
17.2.2 Concepts for Operating Conditions
The concepts of operating conditions (environment) (Olsen 2005):
1. reference operating conditions; a small working area where the accuracy of the sensor device is
valid,
2. normal operating conditions; the working area the sensor device is constructed to operate in,
3. operative limits; the limits of the working area where the sensor device can operate without being
destroyed,
4. transportation and storage conditions; the range of the measurand where the sensor device can be
kept without being destroyed or needing recalibration.
Figure 17.6 shows these operating conditions of a sensor device.
17.2.3 Concepts for Accuracy
The output value from the sensor device will vary from reading to reading; the value will NOT be
constant even if the measurand seems to be constant. This is due to a set of small random errors like
noise etc. See Figure 17.7.
The variation of the output value will however follow a normal distribution, where the mean should
be the most correct value for the measurand and the standard deviation often depends on the accuracy
of the sensor. The formula for the normal distribution is:
f(x) = (1 / (σ·√(2π))) · e^(−(x − μ)² / (2σ²))
where σ is the standard deviation of the population and μ is the mean of the population. Often the
normal distribution is denoted by:
x ~ N(μ, σ²)
Figure 17.8: A set of normal distribution curves with different μ and σ.
where N(·) is the normal distribution, μ is the mean, and σ² is the variance. As a rule of thumb for the
normal distribution:
Values  Range
68%     ±1σ
95%     ±2σ
99.7%   ±3σ
meaning that 68% of all measured values should be within one standard deviation of the mean value.
A set of normal distribution curves is shown in Figure 17.8.
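The rule of thumb can be verified numerically: the fraction of a normal distribution within k standard deviations of the mean is erf(k/√2). A short sketch (the helper name is mine):

```python
# Numerical check of the 68-95-99.7 rule for the normal distribution.
import math

def within_k_sigma(k):
    """Fraction of a normal distribution within k standard deviations."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"within {k} sigma: {within_k_sigma(k) * 100:.1f}%")
# within 1 sigma: 68.3%
# within 2 sigma: 95.4%
# within 3 sigma: 99.7%
```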
Accuracy properties:
1. Sensitivity; the smallest change in the measurand that can be detected by the device
(Δoutput / Δinput), the absolute change, often in voltage,
2. Resolution; the smallest portion of the measurand that can be observed by the device, the relative
change, often depending on the range, often given in number of bits,
3. Repeatability; the closeness of successive measurements carried out under the same conditions,
4. Reproducibility; the closeness of successive measurements carried out with a stated change in conditions,
5. Accuracy; the closeness of the measurement and the measurand,
6. Absolute accuracy; the closeness of the measurement and the measurand,
7. Relative accuracy; the closeness of the measurement and a reference value,
8. Error; the deviation between the measurement and the measurand,
9. Random error; the deviation of a measurement from the mean of a large number of measurements
of the same measurand (see Figure 17.10):
random_error = measured_value − average_of_readings
10. Systematic error; the amount by which the mean of a large number of measurements of the same
measurand deviates from the measurand (see Figure 17.10):
systematic_error = average_of_readings − measurand
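The two error definitions can be illustrated with a small sketch; the readings below are made-up data, not from the text:

```python
# Sketch of random vs. systematic error for repeated readings of a
# measurand known to be 100.0 (hypothetical data).
readings = [100.4, 99.9, 100.6, 100.2, 100.4]
measurand = 100.0

average = sum(readings) / len(readings)
systematic_error = average - measurand            # bias of the sensor
random_errors = [r - average for r in readings]   # scatter around the mean

print(round(systematic_error, 3))  # 0.3
print([round(e, 3) for e in random_errors])
```

By construction the random errors average out to zero; what remains is the systematic error (bias), which calibration can remove but averaging cannot.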
Figure 17.9: Sensor hysteresis showing the differences when the measurand is increasing or decreasing.
Figure 17.10: Systematic error and random error for a sensor device.
11. Uncertainty; an estimate of a possible error in the measurement,
12. Time drift; changes in accuracy over a long period of time (1 year?),
13. Hysteresis error; the difference in the sensor device output for a specific measurand depending on
whether the previous measurand was lower or higher, see Figure 17.9,
14. Nonlinearity error; the maximum deviation from a straight line.
Accuracy and precision
Accuracy and precision are important when doing measurements and depend on the entire DAQ
system. Accuracy defines how close the measurement is to the real value, and precision is how well repeated
measurements under unchanged conditions show the same results. Some examples of accuracy and
precision are shown in Figure 17.11, the relationship between accuracy and precision and the time domain
of the signal is shown in Figure 17.12, and the relationship between accuracy and precision is shown in
Figure 17.13. The precision indicates the repeatability of the device.
Accuracy is one of the most important considerations for measurement, and often the accuracy is
given as a percentage of the full range output, stated as FRO (Full Range Output, also called Full Scale (FS)
or Full Scale Output (FSO)). When using the sensor
device as an input device to a model, the repeatability will be more important than the accuracy. Why?
The model must be trained with proper data, and it does not matter if the sensor device signals are not
that accurate, as long as the sensor device signals are repeatable.
Accuracy, resolution and repeatability
Figure 17.14 shows different combinations of resolution, accuracy, and repeatability. The repeatability
can be high even if the resolution or the accuracy is low. Using a model, the model can be trained to
deal with lower resolution and/or accuracy, but not with low repeatability.
error = measured_value − measurand_value
Example 9 You buy a pressure sensor with the measuring range [1 bar, 10 bar] and an accuracy of
0.5% FRO. Your measurement range is [1 bar, 5 bar]; what will be the minimum and maximum accuracy
for your measurement range? The relative accuracy will be 1% at 5 bar and 5% at 1 bar, as the
absolute accuracy is 10 bar · 0.5/100 = 50 mbar. The solution will be to always select a measurement range of the sensor
device as close as possible to your measurement range!
Figure 17.11: Some examples of accuracy and precision (Mat 1999).
Figure 17.12: The relationship between precision, accuracy, and the signal in the time domain (Olsson
& Piani 1998).
Figure 17.13: The relationship between accuracy and precision (www.wikipedia.org 2010).
Figure 17.14: The relationship between resolution, accuracy, and repeatability. The repeatability
can be high even if the resolution or the accuracy is low (www.keithley.com: oct-2010).
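The arithmetic in Example 9 can be sketched as follows (the names are mine): a percent-of-FRO accuracy is a fixed absolute error band, so the relative error grows toward the low end of the measurement range.

```python
# Sketch of the FRO accuracy arithmetic in Example 9: 0.5% FRO on a
# 10 bar full-range sensor is a fixed absolute error band.
FULL_RANGE_BAR = 10.0
ACCURACY_FRO = 0.5 / 100.0

abs_error_bar = FULL_RANGE_BAR * ACCURACY_FRO   # 0.05 bar = 50 mbar

def relative_accuracy_percent(measured_bar):
    """Relative error of a reading, given the fixed absolute error band."""
    return abs_error_bar / measured_bar * 100.0

print(round(abs_error_bar * 1000, 1))            # 50.0 (mbar)
print(round(relative_accuracy_percent(5.0), 2))  # 1.0  (% at 5 bar)
print(round(relative_accuracy_percent(1.0), 2))  # 5.0  (% at 1 bar)
```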
External errors:
1. interference errors from vibrations, noise, and the power supply unit (PSU),
2. wrong use of the sensor device, like wrong calibration, using it outside the normal operating conditions, or wrong interface connections.
17.3 Sensor output signals
The output from a sensor device can be either an analog signal or a digital signal. This is not the
range of the sensor device, but the interface signal for connection to a DAQ system. The output signal
variables can be:
1. current signal; normally 4-20 mA, often used in noisy environments,
2. voltage signal; the most commonly used interface signal, with three important properties:
(a) amplitude,
(b) frequency,
(c) duration,
3. bandwidth; the range of the frequencies present in the measured signal; all sensors have a low and
a high limit for measurement.
17.4 Dynamic measurement
When the measurand is unchanging in time and the measurement system is showing the same value
in response to the measurand, the measurement process is said to be static. When the measurand is
changing in time and the measurement system is not showing instantaneous response, the measurement
process is said to be dynamic. When the measurement system is dynamic there is usually an error
introduced into the measurement and actions must be taken to minimize this error.
Dynamic responses of a measurement system can usually be placed into one of three categories:
1. zero order; responds instantly to the measurand, although no measurement system is truly zero order,
2. first order; see Response A in Figure 17.15,
Figure 17.15: The dynamic response of a system with a pulse on the input. Response A is a first order response and Response B is a second order response.
Figure 17.16: The dynamic response of a sensor: T_0 is the dead time, T_d is the delay time, T_p is the peak time, y_p is the peak value, and T_s is the settling time (Olsson & Piani 1998).
3. second order; see Response B in Figure 17.15.
The sensor will also have some dynamics, but this information is normally not included in the datasheets of the sensors. The dynamic response of a sensor can be tested with a step response. Figure 17.16 shows the parameters that describe the response of a sensor, and these parameters should be as small as possible (Olsson & Piani 1998).
The dynamic parameters are (Olsson & Piani 1998):
dead time (T_0); the time between the first change of the physical value and the first change in the output signal of the sensing device,
rise time; the time it takes to pass from 10% to 90% of the steady-state response,
delay time (T_d); the time to reach 50% of the steady-state response,
peak time (T_p); the time to reach the first peak,
settling time (T_s); the time after which the sensor step response stays within a certain percentage (e.g. 5%) of the steady-state value.
17.5 MEMS
Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate. The electronics are fabricated as integrated circuits on the silicon substrate, while the micromechanical components are fabricated by etching away parts of the silicon substrate or adding new structural layers to form the mechanical and electromechanical devices.
Chapter 18
Signal condition systems
This section will focus on measurement systems with electrical signals. The sensing device has an electrical output, meaning that a change of the measurand causes a change of an electrical property. Often the measurand will change a resistance, capacitance, or voltage of the sensing device, but in some cases the output can depend on current, frequency, or electric charge as well.
Figure 18.1 shows the signal condition part of a sensor device.
The focus will be on sensing devices with an electrical output, which have these advantages over mechanical devices:
1. ease of transmitting the measurement signal from the sensing device to the measurement system,
2. ease of amplifying, filtering, and otherwise modifying the signal,
3. ease of converting the signal to a digital signal for monitoring and control,
4. ease of logging the signal.
Electrical sensing devices are normally called sensors, but can also be called transducers, gages, cells, pickups, and transmitters. A measurement system can be as shown in the Figures ??, 17.2, or 17.3. The sensing devices will be the focus in later chapters; in this chapter we will focus on the signal conditioning device. The most common functions of the signal conditioning device or stage are:
1. amplification,
2. attenuation,
3. filtering,
4. differentiation,
5. integration,
6. linearization,
7. combining the measured signal with a reference signal,
8. converting the signal to an output signal (often voltage or current),
Figure 18.1: The signal condition part of a sensor device.
Figure 18.2: An amplifier with the input voltage V_i and the output voltage V_o (Wheeler & Ganji 2004).
9. electrical isolation; high-voltage transients, safety, or grounding,
10. multiplexing; mixing several signals,
11. excitation source.
The signal conditioning function will be one, or a combination of several, of the listed functions and is a very important part of the sensor device.
18.1 Amplification
Sensing devices often produce low voltage signals (μV or mV), and since these signals are difficult to transmit over wires, the signal should be amplified. See Figure 18.2 for an overview of an amplifier used for raising the low input voltage; the change is due to the gain property of the amplifier.
An amplifier to be used in a measurement system will often be an instrumentation amplifier. An instrumentation amplifier is a type of differential amplifier that has been outfitted with input buffers, which eliminate the need for input impedance matching and thus make the amplifier particularly suitable for use in measurement and test equipment. Additional characteristics include very low DC offset, low drift, low noise, very high open-loop gain, very high common-mode rejection ratio, and very high input impedances. Instrumentation amplifiers are used where great accuracy and stability of the circuit, both short- and long-term, are required (www.wikipedia.org 2006).
Let the low voltage be V_i (input voltage) and the output voltage of the amplifier be V_o; the gain G will be:
G = V_o / V_i
The gain can be any value, but is often within [1, 1000], or a decrease within [0, 1]. Gain is normally given on a logarithmic scale, expressed in decibels (dB), as:
G_dB = 20 log10 G = 20 log10 (V_o / V_i)
Example 10 The output voltage of a sensing device is maximum 5 mV, and you need a maximum voltage of 5 V as input to your measurement system. Show that the gain of the instrumentation amplifier should be 60 dB.
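The arithmetic in Example 10 can be checked directly; a minimal sketch:

```python
import math

def gain_db(v_out, v_in):
    """Voltage gain in decibels: G_dB = 20*log10(V_o/V_i)."""
    return 20 * math.log10(v_out / v_in)

# Example 10: 5 mV in, 5 V out -> linear gain 1000 -> close to 60.0 dB
print(gain_db(5.0, 0.005))
```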
The amplifier used for changing the sensing signal, normally amplifying it, can also change the signal in other ways, for example frequency distortion and/or phase distortion.
The design of the amplifier will depend on the sense resistor, which can be placed between the load and circuit ground or between the supply and the load. Low side current sensing has the sense resistor between the load and circuit ground; high side current sensing has the sense resistor between the supply and the load. Figure 18.3 shows the high side sensing circuit to the left and the low side sensing circuit to the right. The value of the resistor should be as low as possible to keep power dissipation in check, but high enough to generate a voltage detectable by the amplifier.
18.1.1 Bandwidth distortions
Amplifiers have different gains for different frequencies; therefore the bandwidth of the amplifier is important. The gain will always be reduced for (low and) high frequencies of a signal amplifier.
Figure 18.3: The high side sensing circuit to the left and the low side sensing circuit to the right
(Electronic Engineering Times, Oct-09).
Figure 18.4: The bandwidth of an amplifier with the −3 dB cutoff frequencies f_low and f_high (Wheeler & Ganji 2004).
The bandwidth is a parameter describing the frequency range. The bandwidth is defined as the range between the low and high frequencies where the gain is reduced by 3 dB, see Figure 18.4. The bandwidth is measured in hertz (Hz).
The reduction of the gain is:
Gain = 1/√2 ≈ 0.707 ≈ −3 dB
meaning that the output voltage is ≈ 71% of the maximum output voltage.
Due to the bandwidth, the amplification of the different frequencies will differ, giving frequency distortion. Figure 18.5 shows the frequency distortion of a square wave input signal. The square wave signal contains a wide range of harmonics, and since an amplifier has a different gain factor for each frequency due to the bandwidth, this gives the frequency distortion.
Figure 18.5: Frequency distortion of a square-wave input signal (Wheeler & Ganji 2004).
Figure 18.6: The phase angle of a sine wave signal (Wheeler & Ganji 2004).
Figure 18.7: A phase angle response diagram of an amplier (Wheeler & Ganji 2004).
Normally the gain will be relatively constant over the bandwidth, but the phase angle, another property of the output signal, can change significantly. If the input signal of the amplifier is expressed as:
V_i(t) = V_mi sin 2πft
where f is the frequency and V_mi is the maximum amplitude of the input sine wave, the output signal will be:
V_o(t) = G V_mi sin(2πft + φ)
where φ is called the phase angle, see Figure 18.6. Normally the phase shift will not be a problem, but for complicated periodic waveforms it may result in a problem called phase distortion.
The phase angle response diagram is shown in Figure 18.7, showing the phase angle versus the logarithm of the frequency. The combination of the bandwidth diagram in Figure 18.4 and the phase-angle response diagram is called the Bode diagram of a dynamic system.
It can be shown that with a linear variation of the phase angle with the frequency, the output signal will only be delayed or advanced in time. A non-linear phase angle will disturb the output signal, giving phase distortion. This is shown in Figure 18.8, where (a) is the input signal, (b) is the output signal with a linear phase angle, and (c) is the output signal with a non-linear phase angle.
18.1.2 Common-mode rejection ratio (CMRR)
The common-mode rejection ratio (CMRR) of a differential amplifier (or other device) measures the tendency of the device to reject input signals common to both input leads (www.wikipedia.org 2006). As shown in Figure 18.2, an instrumentation amplifier has two inputs, and the connection of these inputs can be differential mode or common mode. In differential mode the input voltage is applied between the two input terminals, as shown in Figure 18.9(a). When the same input voltage is applied to the two input terminals, relative to ground, the input is a common-mode voltage. An ideal instrumentation amplifier will not produce an output signal from a common-mode voltage, but real amplifiers will.
The common-mode rejection ratio is defined as:
Figure 18.8: A signal with a linear and non-linear phase angle variation with frequency: (a) input signal; (b) linear variation of phase angle; (c) non-linear variation of phase angle.
Figure 18.9: Common-mode rejection ratio, the differential mode connection to the left and the common mode connection to the right.
Figure 18.10: A simple model of the sensing device (a) and the amplifier (b) (Wheeler & Ganji 2004).
Figure 18.11: A connection of the sensing device model (a), the amplifier model (b), and the load (c) (Wheeler & Ganji 2004).
CMRR = 20 log10 (G_diff / G_cm)
expressed in decibels. G_diff is the gain in differential mode and G_cm is the gain in common mode. Since the signals of interest often appear in differential mode and noise signals often appear in common mode, a high value of CMRR is desirable (often more than 100 dB).
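As a quick numeric illustration (the gain values below are assumptions, not from the text), the CMRR definition can be evaluated as:

```python
import math

def cmrr_db(g_diff, g_cm):
    """CMRR = 20*log10(G_diff/G_cm), expressed in decibels."""
    return 20 * math.log10(g_diff / g_cm)

# a differential gain of 1000 and a common-mode gain of 0.01
# give a CMRR of 100 dB, right at the desirable 100 dB level
print(cmrr_db(1000.0, 0.01))
```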
18.1.3 Input and output loading
Connecting sensing devices, signal conditioning devices, and measurement systems can give input and output loading problems for these devices and systems. See Figure 18.1. To analyze these problems, simple models of the devices can be used. The sensing device can be modeled as a voltage generator V_s in series with a resistor R_s. This model is shown in Figure 18.10(a) and shows how the sensing device will behave when something is connected to it. An equivalent model can describe the instrumentation amplifier as well, shown in Figure 18.10(b).
The output voltage of the sensing device, V_s, will then depend on the load of the device. Generally the input load should be as high as possible and the output load as low as possible. A model of the complete system is shown in Figure 18.11 with the sensing device (a), the amplifier (b), and the load (c).
The current from the sensing device will be:
I_i = V_s / (R_s + R_i)
giving the input voltage of the amplifier to be:
V_i = R_i · (V_s / (R_s + R_i)) = R_i V_s / (R_s + R_i)
Let R_i ≫ R_s; then V_s ≈ V_i.
The output of the amplifier will then be:
V_L = R_L G V_i / (R_o + R_L) = [R_L / (R_o + R_L)] · G · [R_i V_s / (R_s + R_i)]   (18.1)
Figure 18.12: A dividing network of resistors used for signal attenuation.
Let R_i ≫ R_s and R_L ≫ R_o; then an approximation of Equation 18.1 will be:
V_L = G · V_s
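Equation 18.1 can be evaluated numerically; the sketch below uses assumed component values to show that with R_i much larger than R_s, and R_L much larger than R_o, the chain output approaches G · V_s:

```python
def loaded_output(v_s, r_s, r_i, g, r_o, r_l):
    """Output voltage of the sensor-amplifier-load chain (Equation 18.1)."""
    v_i = r_i * v_s / (r_s + r_i)        # input loading of the amplifier
    return r_l / (r_o + r_l) * g * v_i   # output loading by the load

# 10 mV sensor signal, gain 100: the ideal output is 1.0 V
v = loaded_output(v_s=0.01, r_s=100.0, r_i=1e6, g=100.0, r_o=10.0, r_l=1e6)
print(v)  # close to, but slightly below, 1.0 V
```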
18.2 Attenuation
In some situations the amplified signal will be higher than the input range of the next device, and the output signal must be reduced; this is known as attenuation. A dividing network of resistors can be used for signal attenuation and is shown in Figure 18.12.
The current I_i will be:
I_i = V_i / (R_1 + R_2)
and the output voltage will be:
V_o = R_2 · I_i = R_2 · (V_i / (R_1 + R_2)) = R_2 V_i / (R_1 + R_2)
Remember that these equations assume that R_2 << R_L, where R_L is any resistance load.
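The divider equations can be sketched directly (the component values below are assumptions for illustration):

```python
def divider_output(v_i, r1, r2):
    """Resistive divider: V_o = R2 * V_i / (R1 + R2)."""
    return r2 * v_i / (r1 + r2)

# attenuate a 10 V signal to 2 V with R1 = 40 kohm and R2 = 10 kohm
print(divider_output(10.0, 40e3, 10e3))  # -> 2.0
```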
18.3 Filtering
Often an input signal will be complicated, a sum of many different frequencies and amplitudes. Filtering is used to remove some of these frequencies. Two very common situations where filtering is used are noise and aliasing. Aliasing concerns sampling and will be discussed later in the course. Noise consists of unwanted frequency components of the signal, and a filter can remove these unwanted components.
Filters can either be hardware devices (combinations of resistors, capacitors, and/or inductances) or implemented in software. Hardware filters are used if filtering is necessary before the measurement (DAQ) system; software filters can only be applied to the signal after the digital conversion.
There are four different types of filters:
1. low-pass filter; see Figure 18.13 (a), passband at low frequency, stopband at high frequency,
2. high-pass filter; see Figure 18.13 (b), passband at high frequency, stopband at low frequency,
3. band-pass filter; see Figure 18.13 (c),
4. band-stop filter; see Figure 18.13 (d).
As shown in Figure 18.13, the filters have a corner frequency, f_c, indicating the frequency where the signal attenuation starts. A very large number of hardware filters exist; four classes of filters are most used. Each filter class has unique characteristics that make it suitable for a particular application. These four classes are:
1. Butterworth; maximally flat in the passband but not a crisp cut-off in the stopband, see Figure 18.14,
2. Chebyshev; crisp cut-off in the stopband but ripples in the passband, see Figure 18.15,
3. elliptic; a very crisp transition between passband and stopband, but ripples in both passband and stopband,
4. Bessel; a good linear variation of the phase angle with frequency in the passband, but a lower roll-off rate than Butterworth filters, see Figure 18.16.
Figure 18.13: Four different types of filters: (a) lowpass filter; (b) highpass filter; (c) bandpass filter; (d) bandstop filter (Wheeler & Ganji 2004).
A common characteristic is the filter order. The higher the order, the greater the attenuation outside the corner frequencies (in the stopband) will be.
18.3.1 Low pass filter
A low pass filter is used for smoothing a set of the last input values (present values) and will give some delay of the actual input signal. The differences between the present value, a filtered value, a predicted value, and a trend value are shown in Figure 18.17.
The low pass filter is shown in Figure 18.18, where the cut-off frequency, f_c, is where the amplitude is reduced by 3 dB, or to 1/√2.
Hardware low pass filter
The simplest and cheapest filter available is a single pole RC filter: a resistor (R) in series with the signal and a capacitor (C) between the signal and ground. The filter rolls off at 6 dB per octave (20 dB per decade) above the corner frequency at:
f_c = 1 / (2πRC)
A simple low pass filter can be designed using only a resistor and a capacitor as shown in Figure 18.19. The time constant for the filter will be:
τ = 1/ω_cutoff = R · C
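The corner frequency formula can be sketched as follows (the component values are assumptions for illustration):

```python
import math

def rc_cutoff_hz(R, C):
    """Corner frequency of a single pole RC filter: f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2 * math.pi * R * C)

# 1.6 kohm and 100 nF give a corner frequency close to 1 kHz
print(rc_cutoff_hz(1600.0, 100e-9))
```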
In discrete time a low pass filter will be:
y(k) = (1 − a) · y(k − 1) + a · u(k)
Figure 18.14: Gain of a lowpass Butterworth filter as a function of filter order and frequency (Wheeler & Ganji 2004).
Figure 18.15: Gain of a lowpass Chebyshev filter as a function of filter order and frequency (Wheeler & Ganji 2004).
Figure 18.16: Phase angle variation with frequency for the Bessel and Butterworth classes of filters (Wheeler & Ganji 2004).
Figure 18.17: Filter values are delayed values of the present value, smoothing several of the last present values. Past values are used for trending and predicted values are estimates of the future.
Figure 18.18: The response of a low pass filter.
Figure 18.19: A first order LP filter using a resistor and a capacitor.
Figure 18.20: The input (solid line) signal and the output (dotted line) signal for an IIR low pass filter.
Figure 18.21: A moving average filter.
where the filter constant a is a number between 0 and 1. The filter constant is:
a = dT / (RC + dT)
where dT is the sampling time. Knowing the sampling time and the filter constant, the RC factor is:
RC = dT · (1 − a) / a
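The discrete-time low pass filter above is only a few lines of code; this sketch also derives the filter constant from the sampling time and the RC value:

```python
def lowpass_constant(dT, RC):
    """Filter constant a = dT / (RC + dT)."""
    return dT / (RC + dT)

def lowpass_step(y_prev, u, a):
    """One update of y(k) = (1 - a)*y(k-1) + a*u(k)."""
    return (1 - a) * y_prev + a * u

# filter a constant input: the output converges towards the input
a = lowpass_constant(dT=0.1, RC=0.9)   # a = 0.1
y = 0.0
for _ in range(100):
    y = lowpass_step(y, 1.0, a)
print(y)  # close to 1.0
```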
Software low pass filter
A low pass filter is often implemented as an exponential filter or as a moving average filter. The exponential filter is an infinite impulse response (IIR) filter (Ifeachor & Jervis 2002), meaning that there is feedback in the filter. The box-car average, moving average, and weighted moving average filters are finite impulse response (FIR) filters (Ifeachor & Jervis 2002), meaning that there is no feedback in these filters.
The exponential filter will be:
y_k = θ · x_k + (1 − θ) · y_{k−1}
where y_k is the new filter value and x_k is the new sensor value. The filter is very easy to implement in software as only the previous value is needed for each calculation. Figure 18.20 shows the input signal (solid line) and the output signal (dotted line) of such a filter.
A moving average filter is shown in Figure 18.21.
The filter requires a ring buffer in software to save the last n values from the sensor device, and an average calculation for each reading of a new sensor value. The output value y_k will be:
y_k = (1/n) · Σ_{i=1}^{n} x_i
where y_k is the filter output value at step k estimated from the last n sensor device values. Figure 18.22 shows the input signal (solid line) and the output signal (dotted line) of such a filter.
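A moving average filter with a ring buffer can be sketched with Python's deque, which discards the oldest value automatically once the buffer is full:

```python
from collections import deque

class MovingAverage:
    """FIR moving average over the last n sensor values."""
    def __init__(self, n):
        self.buf = deque(maxlen=n)   # the ring buffer
    def update(self, x):
        self.buf.append(x)           # oldest value dropped when full
        return sum(self.buf) / len(self.buf)

f = MovingAverage(3)
for x in (1.0, 2.0, 3.0, 4.0):
    y = f.update(x)
print(y)  # -> 3.0 (average of the last three values: 2, 3 and 4)
```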
Figure 18.22: The input (solid line) signal and the output (dotted line) signal for an FIR low pass filter.
Figure 18.23: A simple high pass filter using a capacitor (C) and a resistor (R).
18.3.2 High pass filter
The simplest and cheapest filter available is a single pole RC filter, a resistor (R) and a capacitor (C), as shown in Figure 18.23.
The filter rolls off at 6 dB per octave (20 dB per decade) below the corner frequency at:
f_c = 1 / (2πRC)
The time constant for the filter will be:
τ = 1/ω_cutoff = R · C
In discrete time a high pass filter will be:
y(k) = a · y(k − 1) + a · (u(k) − u(k − 1))
where the filter constant a is a number between 0 and 1. The filter constant is:
a = RC / (RC + dT)
where dT is the sampling time. Knowing the sampling time and the filter constant, the RC factor is:
RC = dT · a / (1 − a)
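The discrete-time high pass filter can be sketched the same way; for a constant input the output decays towards zero, as a high pass filter should:

```python
def highpass_constant(dT, RC):
    """Filter constant a = RC / (RC + dT)."""
    return RC / (RC + dT)

def highpass_step(y_prev, u, u_prev, a):
    """One update of y(k) = a*y(k-1) + a*(u(k) - u(k-1))."""
    return a * y_prev + a * (u - u_prev)

a = highpass_constant(dT=0.1, RC=0.9)   # a = 0.9
y, u_prev = 0.0, 0.0
for _ in range(50):                      # constant input of 1.0
    y = highpass_step(y, 1.0, u_prev, a)
    u_prev = 1.0
print(y)  # close to 0: the DC component is blocked
```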
18.3.3 FIR or IIR filter
The Finite Impulse Response (FIR) filter is based on feedforward, while the Infinite Impulse Response (IIR) filter is based on both feedforward and feedback. Suggestions for using these filters:
IIR: use if a sharp cut-off is wanted and the data rate is large,
FIR: use if a linear phase is wanted and the data rate is small.
An FIR filter is more complicated than an IIR filter, but also more flexible.
Figure 18.24: A voltage to current converter to the left, a frequency to voltage converter in the middle, and a frequency to current converter to the right.
18.4 Differentiation
A method to compute the rate at which an output signal V_o is changing with respect to the input signal V_i. The rate of change is called the derivative of V_o with respect to V_i. The derivative of a curve will be the slope of a line that is a tangent to the curve (www.wikipedia.org 2006). One usage will be for compression (remove the data, keep the information).
18.5 Integration
The integral is the area of a region in the xy-plane bounded by a graph, the x axis, and the vertical boundary lines (www.wikipedia.org 2006):
A = ∫_a^b f(x) dx
Integration means summing the area, not the values. The usage is for sensor devices sensing only changes of a property; integration can then be used to get the total value.
18.6 Linearization
Finding a linear approximation to a function at a given point (www.wikipedia.org 2006).
18.7 Combiner
A combiner can be used at the output, to combine the output signal with:
1. a reference signal to add or subtract an offset,
2. modulation.
Several other ways of combining a set of signals exist as well.
18.8 Conversion
Converters exist for voltage to current, frequency to voltage, and frequency to current. Normally the DAQ system has voltage inputs; use voltage for short distances and current for larger distances between the sensor and the measurement system. Block diagrams of some of the converters are shown in Figure 18.24.
18.8.1 Low-level analog voltage signal
Low-level analog voltage signals, below 100 mV, are common from sensing devices. It is difficult to transmit such signals over long distances due to noise (ambient electric and magnetic fields can induce voltages in the signal wires). An instrumentation amplifier should be used to make a high level signal.
Figure 18.25: Two systems communicating using voltage, the output voltage V_o and the input voltage V_i.
18.8.2 High-level analog voltage signal
A standard level of an analog voltage signal is 0–10 V, which can be transmitted over a distance of 10–30 m without major problems. The limitation in distance is due to the resistance of the wire, shown in Figure 18.25.
The output voltage is V_o, the output resistance is R_o, the wire resistance is R_w, the input resistance is R_i, and the input voltage is V_i. R_i should be much larger than R_o, giving the current:
I = V_o / (R_w + R_i)
The input voltage will be:
V_i = R_i · I = V_o R_i / (R_w + R_i) ≈ V_o
as long as R_i is much larger than R_w.
18.8.3 Current-loop analog signal
The output of the sensor is converted to a current signal instead of a voltage signal. A standard signal is 4–20 mA, meaning that the power consumption of the sensor is 4 mA and the range for the measurand will be 4–20 mA, giving a 16 mA span. The signal can be transmitted over a distance of up to 3 km without major problems. See Figure 17.4. The current loop sensor contains a current generator to convert the value of the measurand to a current signal, and the sensor needs a minimum voltage to operate this current generator. Often this minimum voltage is about 9 V. Most DAQ systems have only voltage inputs, and the current output from the sensor must then be converted to a voltage signal. A high precision resistor is often used for this purpose; the temperature specification is often the most important property of the high precision resistor.
These sensors are used a lot in the industry, and the HART protocol is a way of adding digital information to the analog signal. A protocol is a set of rules defining how two or several computers can communicate. The protocol may define both hardware and software requirements, or only hardware requirements.
The current-loop signal is more immune to noise than a voltage signal due to the lower input impedance of a current loop input. A DAQ system with an input range of 0 V to 5 V and a noise current of 1 μA will have:
with an input resistance of 1 MΩ, an input noise of:
U_noise = R · I = 1 MΩ · 1 μA = 1 V
being 20% of the voltage range;
with a current signal of 4–20 mA, an input resistance of:
R = U / I = 5 V / 20 mA = 250 Ω
giving an input noise of:
U_noise = R · I = 250 Ω · 1 μA = 0.25 mV
being 0.005% of the voltage range.
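The two noise calculations above can be verified numerically:

```python
def noise_voltage(r_ohm, i_noise_a):
    """Noise voltage induced over the input resistance: U = R * I."""
    return r_ohm * i_noise_a

u_hi = noise_voltage(1e6, 1e-6)    # 1 Mohm voltage input -> 1 V of noise
u_lo = noise_voltage(250.0, 1e-6)  # 250 ohm current loop input -> 0.25 mV
print(u_hi, u_lo)
```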
18.8.4 Digital signal
The best way is to convert the analog signal to a digital signal as close to the sensing device as possible to avoid noise problems. The digital signal is only a set of voltage pulses, read as 0 if below a lower voltage limit and as 1 if above an upper voltage limit, with an illegal band between these voltage limits. Such signals are more immune to noise problems.
The most used standards for transmitting digital signals are the RS-232C, RS-422, and RS-485 standards. RS-232C is limited to about 10 m, while RS-485 is limited to about 1.2 km. Standards such as USB, FireWire, and Ethernet all use differential signalling of the same kind as RS-485 as the transport medium.
The transmitter and receiver need a protocol to communicate, and different protocols exist. In the process industry a set of fieldbuses exists with different types of protocols, like Profibus, CAN bus, and Fieldbus Foundation. These fieldbuses are all digital buses.
18.9 Noise
In transmitting analog signals across a process plant or factory floor, one of the most critical requirements is the protection of data integrity. However, when a data acquisition system is transmitting low level analog signals over wires, some signal degradation is unavoidable and will occur due to noise and electrical interference. Noise and signal degradation are two basic problems in analog signal transmission. Noise is considered to be any measurement that is not part of the phenomenon of interest.
Noise can be divided into two broad categories:
1. internal noise,
2. external noise.
While internal noise is generated by components associated with the signal itself, external noise results when natural or man-made electrical or magnetic phenomena influence the signal as it is being transmitted. Noise limits the ability to correctly identify the sent message and therefore limits information transfer.
Some of the sources of internal and external noise include:
1. Electromagnetic interference (EMI);
2. Radio-frequency interference (RFI);
3. Leakage paths at the input terminals;
4. Turbulent signals from other instruments;
5. Electrical charge pickup from power sources;
6. Switching of high-current loads in nearby wiring;
7. Self-heating due to resistance changes;
8. Electrical motors;
9. High-frequency transients and pulses passing into the equipment;
10. Improper wiring and installation;
11. Signal conversion error;
12. Uncontrollable process disturbances.
Electronic noise exists in all circuits and devices as a result of thermal noise, also referred to as Johnson noise. The lower the temperature, the lower this thermal noise is. Semiconductor devices can also contribute flicker noise and generation-recombination noise. In any electronic circuit there also exist random variations in current or voltage caused by the random movement of the electrons carrying the current as they are jolted around by thermal energy (www.wikipedia.org 2006).
Figure 18.26 shows the typical noise sources in a measurement system.
Advice for avoiding noise:
Figure 18.26: Some possible noise sources in a measurement system.
1. Use shielded cables,
2. Terminate the shielding only at one end of the cable,
3. Terminate all shields and grounds at one point,
4. Try to separate analog and digital ground terminations,
5. Use current loops for transmitting analog signals,
6. Convert to digital signals as close to the data source as possible,
7. Use high quality power supplies.
Chapter 19
Data Acquisition Systems
19.1 Introduction
The Data Acquisition (DAQ) system is the connection between the sensor devices, the actuator devices, and the computer system. One of its main purposes is to convert analog signals from the real world into digital representations for computer systems. Figure 19.1 shows a DAQ system with sensors connected.
The common subsystems of a DAQ system are:
1. Analog input; signals from analog sensors,
2. Analog output; signals to analog actuators,
3. Digital input; signals from on/off sensors,
4. Digital output; signals to on/off actuators,
5. Counters; counting the frequency, period, or number of events,
6. Timers; output event control or pulse train generation.
These can be different types of modules connected to the computer's I/O ports (parallel, serial, PCMCIA, USB, FireWire, SCSI, network, wireless, ...) or cards inserted into the slots (PCI, ISA) on the motherboard of the computer. An overview of the connections is shown in Figure 19.2. The connection to the computer can be an internal card or an external device, while the sensor connections will most often be an external box.
Important factors of a DAQ system:
1. Interface: The connection between the DAQ system and the computer system. An internal or external system, connected to the intranet or internet? Cable or wireless?
2. Signal conditioning: The analog to digital conversion of the input signals and the digital to analog conversion of the output signals,
3. Number of analog channels: The number and range of the analog input and output signals,
4. Sampling rate: The time to convert an analog signal to a digital signal,
5. Resolution: The smallest value change the system can detect,
6. Accuracy: A function of many variables in the system, including A/D nonlinearity, amplifier nonlinearity, gain and offset errors, drift, and noise,
7. Digital I/O: The number of digital input and output signals.
Figure 19.1: A DAQ system with a set of sensors connected.
Figure 19.2: The connections of the sensors and the DAQ system to a computer.
An example of a complete system is shown in Figure 19.3. This system contains a set of sensors, the measurement system, the motor control system, and the MMI (Man Machine Interface). It is a distributed system containing a set of microcontroller modules with serial communication between the modules. The type of serial communication will decide the maximum distances between the different modules.
Figure 19.4 shows the general structure of a measurement system containing the following devices:
1. the sensing device; part of the sensor device,
2. the signal conditioning device; normally part of the sensor device,
3. the signal processing device; normally part of the DAQ system,
4. the data presentation device; normally part of the computer system.
The signal processing device will be the DAQ system collecting the values from the sensors, most often analog signals, converting these signals to digital representations, and maybe doing some preprocessing of the digital signals as well. An overview of the main components of the analog input section of a DAQ system, where the analog to digital conversion (ADC) is part of the DAQ, is shown in Figure 19.5.
The DAQ system, with an ADC, will normally consist of the following components:
1. ADC; an important part of the DAQ system, converting the analog signal to a digital representation of the analog signal. The ADC is often an expensive component, so the DAQ system normally has only one ADC.
2. MUX; the multiplexer is used to connect several sensors to the single analog to digital converter (ADC), often 4, 8, 16, or 32 inputs connected to one ADC.
3. μC; a microcontroller used for controlling the multiplexer and the ADC:
(a) the multiplexer must be connected to the right sensor,
(b) the ADC must be told to start the conversion; the conversion from analog to digital will always take some time (μs or ms),
(c) the ADC will inform the microcontroller when the conversion is finished,
(d) the converted digital value will be read from the ADC,
(e) the next channel of the multiplexer is selected.
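The (a)-(e) sequence above can be sketched as a polling loop. The mux and adc objects here are hypothetical stand-ins for whatever driver a real DAQ system provides; small simulated versions are included so the sketch runs on its own:

```python
class SimMux:
    """Simulated multiplexer: remembers the selected channel."""
    def __init__(self):
        self.channel = 0
    def select(self, ch):
        self.channel = ch

class SimAdc:
    """Simulated ADC: 'converts' the selected channel to a stored value."""
    def __init__(self, mux, readings):
        self.mux, self.readings, self._busy = mux, readings, False
    def start(self):
        self._busy = True
    def done(self):
        self._busy = False        # a real ADC would take some us or ms
        return True
    def read(self):
        return self.readings[self.mux.channel]

def read_all_channels(mux, adc, n_channels):
    values = []
    for ch in range(n_channels):
        mux.select(ch)              # (a) connect the mux to the sensor
        adc.start()                 # (b) start the conversion
        while not adc.done():       # (c) wait until conversion finished
            pass
        values.append(adc.read())   # (d) read the converted value
    return values                   # (e) loop moves to the next channel

mux = SimMux()
adc = SimAdc(mux, readings=[0.5, 1.2, 3.3, 4.9])
print(read_all_channels(mux, adc, 4))  # -> [0.5, 1.2, 3.3, 4.9]
```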
Figure 19.3: The system blocks of a distributed system containing sensors, measurement system, control system, and MMI (Man Machine Interface) (Cravotta 2008).
Figure 19.4: General structure of a measurement system for a single sensor device (Bentley 2005).
Figure 19.5: An overview of the main components of the analog input section of a DAQ system for reading sensor device values.
Figure 19.6: The conversion of the continuous analog signal to a discrete signal at a specic time. The
arrows on the top indicate the specic times for each sensor.
Figure 19.7: The electrical representation of digital information in a computer.
4. Signal conditioning; some additional conversion of the digital value.
The conversion from analog to digital is called sampling, the reduction of a continuous signal to a discrete signal. Sampling means taking a value at a specific time in the time domain. Figure 19.6 shows the conversion from an analog continuous signal to a discrete signal using a MUX and an ADC.
19.2 Digital representation of numbers
Numbers used by humans are normally represented in base 10 (decimal), but this is not practical for a computer. A more practical base for computers is base 2, as this base can be represented by the presence or absence of a voltage. A voltage will indicate a '1' and no voltage (0 V) will indicate a '0'. See Figure 19.7 for the electrical levels, where +V indicates a '1' and 0 V indicates a '0'. The dotted lines are the limit voltages for detecting a '1' or a '0'.
19.2.1 Integers
A number can then be represented by a set of voltages, and the conversion between base 2 and base 10 will be (in the range [0, 255]):

N_10 = b_7 * 2^7 + b_6 * 2^6 + b_5 * 2^5 + b_4 * 2^4 + b_3 * 2^3 + b_2 * 2^2 + b_1 * 2^1 + b_0 * 2^0

where N_10 is the number in base 10 and b_n are the bit values in base 2. The highest bit (b_7) is called the Most Significant Bit (MSB) and the lowest bit (b_0) is called the Least Significant Bit (LSB). The number of bits will be the same as the word size of the computer; most used are 8 bits, 16 bits, 32 bits, or 64 bits. It is common to break long binary numbers up into segments of 8 bits.
Example 11 Convert the binary number 00100101_2 to a decimal number. The result is 37_10.

The conversion from a base 10 number to a base 2 number is done by repeatedly dividing the number by 2 and using the remainders as the base 2 digits.
Example 12 The 8 bit representation of 101_10 will be:

Number   Remainder   Bit value   Bit
101/2        1           1       0 (LSB)
 50/2        0           2       1
 25/2        1           4       2
 12/2        0           8       3
  6/2        0          16       4
  3/2        1          32       5
  1/2        1          64       6
  0          0         128       7 (MSB)

The result is the remainders starting with the LSB, giving 101_10 = 01100101_2. These numbers can represent only positive decimal integers; how can negative decimal integers be represented?
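The divide-by-2 method of Example 12 can be sketched directly in code; the remainders, collected from LSB to MSB, form the binary number.

```python
# Divide-by-2 conversion from base 10 to base 2, as in Example 12.

def to_binary(n, bits=8):
    """Return the unsigned `bits`-bit binary string for a non-negative integer."""
    digits = []
    for _ in range(bits):
        digits.append(str(n % 2))   # remainder is the next bit (LSB first)
        n //= 2                     # integer division by 2
    return "".join(reversed(digits))

print(to_binary(101))        # 01100101
print(int("00100101", 2))    # 37, the reverse conversion from Example 11
```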
Negative numbers are normally represented by two's complement, meaning that an 8 bit integer will represent the range [-128, 127] instead of the range [0, 255]. The way of converting a negative decimal number using two's complement is:
1. convert the integer to binary as if it were a positive integer,
2. invert all the bits, changing all 0s to 1s and all 1s to 0s,
3. add 1 LSB to the final result.
Example 13 The 8 bit binary representation of -101_10 and the 4 bit representation of -6_10 will then be:

Step                  -101_10     -6_10
1. positive binary    01100101    0110
2. invert the bits    10011010    1001
3. add 1 LSB          10011011    1010

The MSB normally indicates whether the decimal number is positive or negative; the number is negative if MSB=1, and positive if MSB=0.
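The three steps above can be sketched for an arbitrary word size; steps 2 and 3 only apply for negative values.

```python
# Two's complement conversion following the steps of Example 13.

def twos_complement(value, bits=8):
    """Binary string of `value` (may be negative) in `bits`-bit two's complement."""
    if value >= 0:
        n = value
    else:
        n = -value                         # step 1: positive binary
        n = (~n) & ((1 << bits) - 1)       # step 2: invert all the bits
        n = (n + 1) & ((1 << bits) - 1)    # step 3: add 1 LSB
    return format(n, "0{}b".format(bits))

print(twos_complement(-101, 8))  # 10011011
print(twos_complement(-6, 4))    # 1010
print(twos_complement(101, 8))   # 01100101
```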
An integer of 8 bits is limited to the range [-128, 127] or [0, 255], depending on signed or unsigned interpretation. Larger ranges require more bytes; word integers and long integers can be used. Word integers are often limited to 16 bits and long integers to 32 bits, but this may depend on the CPU architecture. A 64 bit CPU architecture may have different limits than a 32 bit CPU architecture.
The CPU architectures also differ in the order of the bytes in multibyte integers. The order can be either Little Endian or Big Endian.
Little Endian means that the low order byte is stored at the first (lowest) address and the high order byte at the last (highest) address. Big Endian means that the low order byte is stored at the last address and the high order byte at the first address. Intel CPU architectures (used in most PCs) use Little Endian, while Motorola CPU architectures (used in many Macs) use Big Endian.
A long integer may consist of four bytes: byte 0, byte 1, byte 2, and byte 3. Byte 0 is the low order byte and byte 3 is the high order byte. The long integer value will be:

value_long = Byte_0 + 256 * (Byte_1 + 256 * (Byte_2 + 256 * Byte_3))

but these bytes will be stored differently depending on the Endian architecture. The address positions in memory will be:

Endian   Adr#0    Adr#1    Adr#2    Adr#3
Little   Byte 0   Byte 1   Byte 2   Byte 3
Big      Byte 3   Byte 2   Byte 1   Byte 0
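The two byte orders can be inspected with the standard-library struct module, where "&lt;" selects little endian and "&gt;" big endian packing:

```python
# Byte order of a 32-bit integer under each endianness.
import struct

value = 0x01020304            # byte 3 = 0x01 (high order), byte 0 = 0x04 (low order)

little = struct.pack("<I", value)
big = struct.pack(">I", value)

print(list(little))  # [4, 3, 2, 1]: low order byte at the first address
print(list(big))     # [1, 2, 3, 4]: high order byte at the first address

# Reconstructing the value with the formula above:
b = list(little)
assert b[0] + 256 * (b[1] + 256 * (b[2] + 256 * b[3])) == value
```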
Using two's complement we can represent positive and negative integers, but what about floating point numbers?
Figure 19.8: The separation of the sign, mantissa, and the exponent in a floating point number (www.wikipedia.org 2006).
19.2.2 Floating point numbers
Floating point numbers are represented by separating the number into 3 parts: the sign, the mantissa, and the exponent. The number

-4.28 * 10^3.2

will have a negative sign, 4.28 as the mantissa (fraction), and 3.2 as the exponent. See Figure 19.8 for the location of the different bits in a floating point number.
Floating point numbers can be represented in single or double precision, and both representations are standardized by IEEE (IEEE 754).
The data for the single and double precision formats are listed below:

                 Single            Double
Size in bits     32                64
Sign bit         1 (bit 31)        1 (bit 63)
Exponent bits    8 (bits 23-30)    11 (bits 52-62)
Mantissa bits    23 (bits 0-22)    52 (bits 0-51)
Largest value    ±3.4 * 10^38      ±1.8 * 10^308

The exponent is biased by 2^(e-1) - 1, where e is the number of bits for the exponent. This means that for single precision the exponent is adjusted by 127. If the exponent is 6, the biased exponent will be 6 + 127 = 133, and the mantissa is adjusted according to the exponent.
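The field layout and the bias can be checked by unpacking the raw bits of a single precision number with the standard-library struct module:

```python
# Unpacking sign, biased exponent, and mantissa of an IEEE 754 single
# precision number.
import struct

def float_fields(x):
    """Return (sign, biased exponent, mantissa) of a 32-bit float."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                 # bit 31
    exponent = (bits >> 23) & 0xFF    # bits 23-30, biased by 127
    mantissa = bits & 0x7FFFFF        # bits 0-22
    return sign, exponent, mantissa

sign, exp, man = float_fields(64.0)   # 64 = 1.0 * 2^6
print(sign, exp, exp - 127)           # 0 133 6  (exponent 6, biased to 133)
```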
19.3 ASCII codes
The computer uses binary signals, but base 2 numbers written out become long. Instead base 8 (octal) or base 16 (hex) is used. In octal, groups of 3 bits form one digit; in hex, groups of 4 bits form one digit.
The conversion between base 2, 8, 10, and 16 for the range [0_10, 15_10] is:

Base 2   8    10   16
0000     00   00   0
0001     01   01   1
..       ..   ..   ..
0111     07   07   7
1000     10   08   8
1010     12   10   A
..       ..   ..   ..
1111     17   15   F
When computers are communicating with the outside world, there must be some sort of protocol¹ defining the conversion between binary numbers and characters for the outside world. The most common code is ASCII, the American Standard Code for Information Interchange. The basic ASCII code uses 7 bits, giving 128 characters; extended 8 bit versions represent 256 characters. Today Unicode is used to extend
¹Protocol: A set of rules (hardware and/or software) defining how to exchange the information between two systems.
Figure 19.9: The input and output sections of a DAQ system. The left side is the input section (analog
and digital), the right part is the output section (analog and digital), and the lower section in the middle
is the computer I/O (digital bus).
the ASCII code for different types of languages and types of media. Some examples of ASCII codes [Character - Base 10 code] are:

Character  Base 10    Character  Base 10    Character  Base 10
(space)    32         0          48         A          65
!          33         1          49         ..         ..
(          40         ..         ..         Z          90
)          41         8          56         ..         ..
+          43         9          57         a          97
-          45         ..         ..         ..         ..

The codes 0-31 are special control characters.
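The table can be reproduced with the built-in ord() and chr() functions, which map between characters and their code points:

```python
# ASCII code points with ord() and chr().
print(ord(" "), ord("0"), ord("A"), ord("a"))   # 32 48 65 97
print(chr(90))                                   # Z

# The digits occupy a contiguous range, so a digit character can be
# converted to its value by subtracting ord("0"):
print(ord("9") - ord("0"))                       # 9
```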
19.4 DAQ parts
The main part of a DAQ system is the analog to digital converter (ADC), as shown in Figure 19.5. Often a DAQ system also contains support for digital inputs, digital outputs, and counters as well; see Figure 19.9 for a complete DAQ system.
19.4.1 Counters
Counters are input and output signals that can be used for timing purposes. The input counters can be used to count a number of pulses or to measure the time between changes of the input signals. These signals are often connected to internal counters or the interrupt system. The output counters can be used for generating frequency signals.
19.4.2 Digital inputs
Digital input signals are only ON/OFF signals, and the number of digital inputs is often a multiple of the data width of the system. This means that the number of digital inputs can for example be 8, 16, 24, or 32. Digital inputs can be used if the process change is a change between two states; this should be converted to an electrical signal being on or off.
Another type of digital input useful in data acquisition applications is the hardware trigger. This allows an external event, often an interrupt signal to the system, to start an acquisition.
Important aspects of digital inputs are:
1. input range of the digital inputs,
Figure 19.10: A digital input using only GND as reference; the external input will be independent of the voltage in your measurement system.
2. input current,
3. noise protection.
The 0 V (or ground) signal is a good reference, and by using pull-up resistors and a diode, the 0 V signal can be used as the only reference. A capacitor can also be added for noise reduction and/or for debouncing. An example of a digital input is shown in Figure 19.10.
19.4.3 Digital outputs
Digital output signals are, just like the digital input signals, only ON/OFF signals. Digital outputs are used to control equipment that is only turned ON and OFF. Often these outputs control relays that can switch any type of signal. A relay can be used both as a Normally Open (NO) and a Normally Closed (NC) device. Important aspects of digital outputs are:
1. output range of the digital outputs,
2. output current.
Often an output amplifier is used to amplify the output current for controlling the relay. This amplifier can be a digital inverter or a single transistor. However, always remember to add a diode across the relay. An example of a digital output with an amplifier, a diode, and a relay is shown in Figure 19.11.
This figure also shows the usage of a transistor for amplification of the digital output signal from your measurement system, as these signals often have low current driving capacity. The digital output can often drive 5-25 mA, while a relay often needs 100-500 mA to operate.
The relay is an inductor and may create problems when the transistor (or any other switch) is turning the current off. The voltage across the inductor in the relay is:

v = L * (di/dt)

where L is the inductance of the relay and i is the current flowing in the inductor. When the current is switched off, the voltage across the actuator can become very high during the switching phase (Olsson & Piani 1998). The diode in Figure 19.11 is used to reduce these voltage spikes. Figure 19.12 shows the current in the actuator to the left and the voltage across the actuator to the right.
19.4.4 Multiplexer
The multiplexer is an electronic switch, used to select the right input channel for the analog to digital conversion. The MUX seems like a simple device, but one important property is crosstalk. Crosstalk is interference between the channels of the MUX, meaning that the signal on one channel leaks into another, so the output will not be exactly the same as the selected input. The crosstalk should be as low as possible, so that the interference between the channels is low.
Figure 19.11: A digital output using a relay, a diode for protection, and a transistor for amplification of the digital output signal from your measurement system.
Figure 19.12: The current and voltage in an inductive actuator when turned off (Olsson & Piani 1998).
Figure 19.13: A resistor ladder for a digital to analog converter (www.wikipedia.org 2006).
Figure 19.14: The output range of a DAC in a DAQ system.
19.4.5 Digital to Analog Converters
In a process system the output of control values can be just as important as the input of sensor signals, and the DAQ system can be used both for input of analog values and output of analog values.
A Resistor Ladder, or R-2R Ladder, is the simplest and least expensive way to perform digital-to-analog conversion, using repetitive arrangements of precision resistor networks in a ladder-like configuration (www.wikipedia.org 2006). Other types of DACs exist as well, but a DAC is normally simpler than an ADC. A resistor ladder DAC is shown in Figure 19.13.
The digital inputs or bits (Bit 0 to Bit 4) range from the most significant bit (MSB) to the least significant bit (LSB). The bits are switched between either 0 V or VREF, and depending on the state and location of the bits, OUT will vary between 0 V and VREF. VREF will be the same voltage as for a logic '1'. See Figure 19.14.
The detailed output of the DAC, shown in Figure 19.15, will be steps, and the number of steps and the resolution depend on the number of bits.
The output voltage from the DAC will be:

V_o = (V_RH - V_RL) / 2^n * D_u

where [V_RL, V_RH] is the output range of the DAC, n is the number of bits, and D_u is the digital output value.
Figure 19.15: The output of a DAC.
Example 14 D_u = 128, n = 8, V_RH = 5 V, V_RL = 0 V gives V_o = (5 V - 0 V) / 2^8 * 128 = 2.5 V.
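The DAC output formula can be sketched directly, reproducing Example 14:

```python
# V_o = (V_RH - V_RL) / 2^n * D_u, the DAC output formula above.

def dac_output(d, n_bits, v_rl, v_rh):
    """Analog output voltage for digital value d on an n_bits DAC."""
    return (v_rh - v_rl) / 2**n_bits * d

print(dac_output(128, 8, 0.0, 5.0))   # 2.5  (Example 14)
print(dac_output(255, 8, 0.0, 5.0))   # 4.98046875, one LSB below V_RH
```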
DAC specifications:
1. Settling time: the period required for a D/A converter to respond to a full-scale set point change.
2. Linearity: the device's ability to accurately divide the reference voltage into evenly sized increments.
3. Range: the reference voltage sets the limit on the output voltage achievable.
4. Output control: amplifiers and signal conditioners are often needed to drive a final control element.
5. Output filter: a low-pass filter may also be used to smooth out the discrete steps in the output.
19.4.6 Analog to Digital Converter
The analog to digital converter is used for converting the analog signal from the output of the MUX to a digital representation of the analog signal. A set of problems arises when doing the analog to digital conversion:
1. the digital value will not be an exact representation of the analog signal,
2. the conversion will take some time, so the analog signal must be held while converting.
The representation of the analog signal will very seldom be exact, due to the resolution in bits. Assume an analog signal in the range [0 V, 10 V] and only 2 bits of resolution. 2 bits gives 2^2 = 4 numbers (0, 1, 2, 3), so the analog voltage range must be divided into 4 steps, giving

(10 V - 0 V) / 4 = 2.5 V

for each step. The conversion will then be:
Value           LSB  MSB  Number
[0 V, 2.5 V]     0    0     0
[2.5 V, 5 V]     1    0     1
[5 V, 7.5 V]     0    1     2
[7.5 V, 10 V]    1    1     3
In general the output of an ADC will have 2^n possible values, where n is the number of bits used for the conversion in the ADC. Normally an ADC in a practical solution has 8, 12, 14, or 16 bits. The resolution of the converter will be:

R = (High - Low) / 2^n
ADCs can vary widely, but there are four important properties for a converter:
1. the number of bits used for conversion; the greater the number of bits, the more accurate the representation of the analog input (8, 10, 12, 14, 16, 18, 20, or 24 bits),
2. the input range; the input range can be unipolar (0 V to +10 V, or -5 V to 0 V) or bipolar (-5 V to +5 V),
3. the reference voltage of the converter; this voltage is part of the accuracy of the converter (often given as ppm/°C),
4. the conversion speed; the time for converting the analog input to a digital representation (µs or ms).
Figure 19.16 shows the details of an A/D converter, consisting of an analog section and a digital section. The analog signal is fed to the converter, and the converter is started by a Start Conversion signal. The converter will report the Conversion Finished signal when the digital representation of the analog value is available in the digital section of the converter. The Read Value signal is used for reading the digital value from the converter. The reference voltage is an important property for the accuracy of the A/D converter. On some converters this is controlled by internal logic, on others by external logic.
Figure 19.16: The details of an Analog to Digital Converter.
Figure 19.17: The principles for a unipolar single-slope analog to digital converter (Wheeler & Ganji 2004).
Different types of ADCs exist; let us use a unipolar single-slope integrating converter to demonstrate the analog to digital conversion process. A block diagram of the converter is shown in Figure 19.17, with the analog input signal on the top left side.
A start signal, lower left side, will start the conversion. The start signal will:
1. latch (lock) the input signal, as the input signal must be constant during the conversion,
2. reset the digital output of the counter,
3. reset the integrator,
4. reset and start the counter using the clock input.
The clock signal drives the counter so that the digital output becomes a representation of the analog value. In parallel with the counter, the integrator ramps up a voltage, and the output from the integrator is compared with the analog value. When the integrator has a higher voltage than the analog voltage, the counter will stop, and the digital representation of the analog value can be read from the digital output of the counter.
The converter will normally contain a sample and hold circuit on the analog value input to keep the analog value constant while the conversion is performed. The conversion time will depend on the speed of the integrator, and there is often a trade-off between speed and resolution:
1. high speed, low precision: interpolation and folding converters,
2. medium speed, medium precision: successive approximation and algorithmic converters,
3. low speed, high precision: integrating, oversampling, and sigma-delta converters.
The most used types of ADC are:
1. Sigma-Delta ADC,
2. successive-approximation ADC.
Successive-approximation
The successive-approximation ADC is the most commonly used type of analog to digital converter. A DAC is used for converting a digital representation to an analog value; a DAC is very simple compared to an ADC. The principle of successive approximation is shown in Figure 19.18, using a comparator, a DAC, and a control module including a binary counter. The main difference from the single-slope integrator is the use of a DAC instead of an integrator. The comparator also includes a sample and hold circuit (S&H) to hold the input value while searching. The binary counter starts with the most significant bit (MSB) and works towards the least significant bit (LSB) using the clock input, giving a fast conversion. The DAC converts the output of the binary counter to an analog value, which is compared with the analog input value in the comparator. When these analog values are approximately equal, the counter stops and the binary counter value is available as the digital output value.
This design offers an effective compromise among resolution, speed, and cost. In this type of design, an internal DAC and a single comparator are used to narrow in on the unknown voltage by setting the bits in the DAC until the voltages match within one least significant bit. Raw sampling speed for successive approximation converters is in the range of 50 kHz to 1 MHz.
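The MSB-to-LSB search can be sketched in a few lines. This is a behavioral model only: dac() stands in for the internal DAC, and the comparison against v_in plays the role of the comparator.

```python
# Bit-by-bit successive approximation search, MSB first.

def sar_convert(v_in, n_bits, v_ref):
    """Return the n_bits digital code for v_in on a unipolar [0, v_ref) ADC."""
    dac = lambda code: v_ref / 2**n_bits * code   # internal DAC model
    code = 0
    for bit in range(n_bits - 1, -1, -1):         # from MSB towards LSB
        trial = code | (1 << bit)                 # tentatively set this bit
        if dac(trial) <= v_in:                    # comparator decision
            code = trial                          # keep the bit
    return code

print(sar_convert(2.5, 8, 5.0))   # 128, half of full scale
print(sar_convert(2.0, 8, 5.0))   # 102
```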
Sigma-delta
A sigma-delta ADC uses a 1-bit DAC, filtering, and oversampling to achieve very accurate conversions. The conversion accuracy is controlled by the input reference and the input clock rate.
Figure 19.18: The principle of a successive approximation A/D converter.
The primary advantage of a sigma-delta converter is high resolution. The flash and successive approximation ADCs use a resistor ladder or resistor string. The problem with these is that the accuracy of the resistors directly affects the accuracy of the conversion result. Although modern ADCs use very precise, laser-trimmed resistor networks, some inaccuracies still persist in the resistor ladders. The sigma-delta converter does not have a resistor ladder but instead takes a number of samples to converge on a result.
The primary disadvantage of the sigma-delta converter is speed. Because the converter works by oversampling the input, the conversion takes many clock cycles. For a given clock rate, the sigma-delta converter is slower than other converter types; or, to put it another way, for a given conversion rate, the sigma-delta converter requires a faster clock. Another disadvantage is the complexity of the digital filter that converts the duty cycle information to a digital output word. Figure 19.19 shows a simplified sigma-delta ADC.
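The duty cycle principle can be illustrated with a minimal first-order modulator sketch. This is not a production design: the decimation filter is reduced to a plain average of the bit stream, and the signal levels are normalized to a ±1 V reference.

```python
# Minimal first-order sigma-delta modulator: the integrator accumulates
# the difference between the input and the 1-bit DAC feedback, and the
# duty cycle of the output bit stream encodes the input level.

def sigma_delta(v_in, n_samples, v_ref=1.0):
    """Oversample v_in (within [-v_ref, +v_ref]) into a 1-bit stream."""
    integrator, feedback, bits = 0.0, 0.0, []
    for _ in range(n_samples):
        integrator += v_in - feedback          # delta, then sigma
        bit = 1 if integrator >= 0 else 0      # 1-bit quantizer
        feedback = v_ref if bit else -v_ref    # 1-bit DAC
        bits.append(bit)
    return bits

bits = sigma_delta(0.5, 1000)
duty = sum(bits) / len(bits)                   # crude decimation filter
print(duty)            # close to (0.5 + 1) / 2 = 0.75
```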
Figure 19.20 shows a sigma-delta converter from Analog Devices. The A/D converter, AD7190, consists of 4 analog inputs, a multiplexer, a sigma-delta A/D converter, and a signal conditioning unit converting the digital representation for serial communication of the data. The device also contains a temperature sensor that can be used as an extra input for temperature compensation.
Integrating
This type of A/D converter integrates an unknown input voltage for a specific period of time, then integrates it back down to zero. This time is compared to the time taken to perform a similar integration on a known reference voltage. The relative times required and the known reference voltage then yield the unknown input voltage. Integrating converters with 12 to 18 bit resolution are available, at raw sampling rates of 10-500 kHz.
Because this type of design effectively averages the input voltage over time, it also smooths out signal noise. And if an integration period is chosen that is a multiple of the AC line frequency, excellent common mode noise rejection is achieved. More accurate and more linear than successive approximation converters, integrating converters are a good choice for low-level voltage signals.
The integrating solution uses more time when the analog signal is closer to the top of the range, while the successive approximation uses a kind of binary search, so its conversion time does not depend much on the analog signal.
Polarity
A DAQ system can convert either unipolar or bipolar signals, or both. A unipolar signal contains only zero and positive values. A bipolar signal contains zero, negative, and positive values. The input devices decide the type of signals, and the DAQ system should adapt to the type of signals to be able to exploit the full resolution of the A/D converter. Figure 19.21 shows the difference between unipolar and bipolar signals.
Figure 19.19: A simplified sigma-delta ADC with examples of the signal levels (www.wikipedia.org 2006).
Figure 19.20: The block diagram of a Sigma Delta analog to digital converter, the AD7190 from Analog
Devices (www.analog.com; FEB-09).
Figure 19.21: The difference between unipolar and bipolar signals.
Range
In bipolar converters two's complement is used, starting with a binary number of -2^n/2 at the lower end, 0 in the middle, and 2^n/2 - 1 at the top of the range. The output of a two's complement A/D converter will then be:

D_O = int( (V_I - V_sl) / (V_su - V_sl) * 2^n ) - 2^n/2

where V_I is the analog input voltage, V_su is the upper value of the input range, V_sl is the lower value of the input range, n is the number of bits, and D_O is the digital output.
A unipolar converter will have the output:

D_O = int( (V_I - V_sl) / (V_su - V_sl) * 2^n )
Example 15 If the input voltage is 2 V, the range is [-5 V, +5 V], and the number of bits is 8, the digital output will be:

D_O = int( (2 - (-5)) / (5 - (-5)) * 2^8 ) - 2^8/2 = int( 7 * 256 / 10 ) - 128 = 179 - 128 = 51
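Both output formulas can be sketched directly, reproducing Example 15:

```python
# The bipolar (two's complement) and unipolar ADC output formulas.

def adc_bipolar(v_in, v_sl, v_su, n_bits):
    """Two's complement output code for v_in within [v_sl, v_su]."""
    return int((v_in - v_sl) / (v_su - v_sl) * 2**n_bits) - 2**n_bits // 2

def adc_unipolar(v_in, v_sl, v_su, n_bits):
    """Unsigned output code for v_in within [v_sl, v_su]."""
    return int((v_in - v_sl) / (v_su - v_sl) * 2**n_bits)

print(adc_bipolar(2.0, -5.0, 5.0, 8))   # 51, as in Example 15
print(adc_unipolar(2.0, 0.0, 10.0, 8))  # 51 on a 0-10 V unipolar range
```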
19.4.7 Resolution
Central to the performance of an A/D converter is its resolution, often expressed in bits. An A/D converter essentially divides the analog input range into 2^n bins, where n is the number of bits.
Since the output of an ADC changes in discrete steps (one LSB), there will be a resolution error, also known as the quantizing error:

±0.5 LSB

The input resolution error will then be:

±0.5 * (V_su - V_sl) / 2^n V

The input resolution error for the example above (range [-5 V, 5 V] and 8 bits) will be:

0.5 * (10 / 256) V = 19.5 mV

The standard resolution² of A/D converters ranges from about 12 bits to about 22 bits, depending on the price and the conversion time.
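The input-referred quantizing error above is a one-line computation:

```python
# Input-referred quantization error: 0.5 * (V_su - V_sl) / 2^n.

def input_resolution_error(v_sl, v_su, n_bits):
    """Input-referred quantizing error (+/-), in volts."""
    return 0.5 * (v_su - v_sl) / 2**n_bits

print(input_resolution_error(-5.0, 5.0, 8) * 1000)   # 19.53125 mV
print(input_resolution_error(-5.0, 5.0, 16) * 1000)  # about 0.076 mV at 16 bits
```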
Resolution, precision, and accuracy are often mixed up. The difference between resolution and precision: resolution is the fineness to which an instrument can be read, while precision is the fineness to which an instrument can be read repeatably and reliably. The difference between resolution and precision is thus repeatability. The difference between precision and accuracy is correctness. See Figures 17.11, 17.13, and 17.12.
²As of 2008.
Figure 19.22: The difference between differential inputs and single ended inputs using an amplifier.
19.4.8 Reference Voltage
A voltage reference is an electronic device (circuit or component) that produces a fixed (constant) voltage to the ADC irrespective of the loading on the device, power supply variations, and temperature. It is also known as a voltage source, but in the strict sense of the term, a voltage reference often sits at the heart of a voltage source (www.wikipedia.org 2006). Voltage references are used in ADCs and DACs to specify the input or output voltage ranges.
19.4.9 Single-Ended and Differential Inputs
Another important consideration when specifying analog data acquisition hardware is whether to use single-ended or differential inputs. In short, single-ended inputs are less expensive but can be problematic if differences in ground potential exist (www.wikipedia.org 2006).
In a single-ended configuration, the signal sources and the input to the amplifier are referenced to ground. This is adequate for high level signals when the difference in ground potential is relatively small. A difference in ground potentials, however, will create an error-causing current flow through the ground conductor, otherwise known as a ground loop (www.wikipedia.org 2006). The input voltage is measured with reference to ground and compared against the reference voltage.
Differential inputs, in contrast, connect both the positive and negative inputs of the amplifier to both ends of the actual signal source. Any ground-loop induced voltage appears in both ends and is rejected as common-mode noise. The downside of differential connections is that they are essentially twice as expensive as single-ended inputs; an eight-channel analog input board can handle only four differential inputs (www.wikipedia.org 2006). The input voltage is measured as the difference between the input lines and compared against the reference voltage.
Figure 19.22 shows the difference between these inputs for an amplifier; differential (DI) inputs do not use the ground as reference for the input signals. DI inputs also require more input connections, as each input has a separate input and return connection. The input signals to the DAQ system are first connected to the multiplexer. The number of channels depends on whether DI or SE inputs are used; normally the number of SE inputs is twice the number of DI inputs. The inputs must be either all DI or all SE; it is not possible to mix SE and DI inputs on the same DAQ device. If one of the inputs needs to be connected as a DI input, all the inputs must be DI inputs. See Figure 19.23.
The advice is to use DI inputs if:
1. the input signal has a low level, normally less than 1 volt,
2. the wires connecting the signal are longer than 3 meters,
3. one of the input signals uses a reference different from the ground reference.
19.4.10 Number of channels
It is important to acknowledge that a multiplexer reduces the frequency with which data points are acquired, and that the Nyquist sample-rate criterion still must be observed. During a typical data acquisition process, individual channels are read in turn sequentially. This is called standard, or distributed, sampling. A reading of all channels is called a scan. Because each channel is acquired and converted at a slightly different time, a skew in sample time is created between the data points.
Figure 19.23: The usage of SE or DI inputs in a DAQ system.
19.4.11 Scaling
Because A/D converters work best on signals in the 1-10 V range, low voltage signals may need to be amplified before conversion, either individually or after multiplexing on a shared circuit. Conversely, high voltage signals may need to be attenuated.
Amplifiers can also boost an A/D converter's resolution of low-level signals. For example, a 12-bit A/D converter with a gain of 4 can digitize a signal with the same resolution as a 14-bit converter with a gain of 1. It is important to note, however, that fixed-gain amplifiers, which essentially multiply all signals proportionately, increase sensitivity to low voltage signals but do not extend the converter's dynamic range.
Programmable gain amplifiers (PGAs), on the other hand, can be configured to automatically increase the gain as the signal level drops, effectively increasing the system's dynamic range. A PGA with three gain levels set three orders of magnitude apart can make a 12-bit converter behave more like an 18-bit converter. This function does, however, slow down the sample rate.
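The gain example can be checked numerically: a gain of G adds log2(G) effective bits of resolution for signals small enough to stay within range.

```python
# Effective resolution gained from analog gain: log2(G) extra bits.
import math

def effective_bits(adc_bits, gain):
    """Effective resolution in bits for a small signal amplified by `gain`."""
    return adc_bits + math.log2(gain)

print(effective_bits(12, 4))   # 14.0, the 12-bit-with-gain-4 example above
```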
19.4.12 Range, Gain and Measured Precision
The input range of the DAQ system can often be configured to different ranges like [-10 V, +10 V], [-5 V, +5 V], [-2 V, +2 V], and [-1 V, +1 V]. The precision depends on the output signal of the sensor device and the corresponding input range of the DAQ system. The input range should be as close as possible to the sensor signal range to get as good precision as possible.
19.4.13 Software calibration
Sometimes you need a more accurate reference than the product cost will support. When manual adjustment is out of the question, the software can compensate for reference voltage variations. This is typically done by providing a known, precise input, which is used to calibrate the ADC. This calibration reference can be very precise (and very expensive) because only a few are needed for the production line.
19.4.14 Transfer of A/D conversion to system memory
The A/D converter will use some time to convert the analog signal to a digital representation, and normally the A/D converter will inform the controller when the conversion has finished. The controller will read the converted value from the A/D converter and write the value to a specific location in the system memory. A First In First Out (FIFO) buffer can be part of the signal conditioning unit, reading the converted values from the A/D converter.
The transfer from the A/D converter or the FIFO buffer to the system memory can be done in different ways. These are:
Figure 19.24: The usage of limit checks and validation checks for a value read from a sensor (Skeie 2008).
1. polling; the controller waits for the A/D converter to finish the conversion. This is seldom used, as it wastes controller time,
2. interrupt; the controller gets a signal from the A/D converter every time the operation has finished. The controller enters an interrupt function only when informed by the A/D converter,
3. Direct Memory Access (DMA) transfer; the A/D converter needs a memory interface (DMA controller) and will transfer the converted data to system memory without intervention from the controller. The DMA controller and the controller cannot use the memory at the same time; normally the DMA controller has priority, and the controller must wait for memory access while the DMA transfer is in progress. The controller can, however, continue with any other tasks that do not access the system memory.
19.5 Range check of signal values
Reading values from sensors can give good, wrong, or illegal values, so it is important to have some sort of value checking of the sensor device signals. Some of these value checks are (Pettersen 1984):
1. limit checks; checking that the sensor value is within the valid range of the sensor,
2. validation checks; checking that the sensor value is within a window around the last value,
3. redundancy checks; checking against other sensors.
Limit checks and validation checks can always be used, as they can be included in the software. The usage of limit checks and validation checks is shown in Figure 19.24.
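The limit check and validation check can be sketched in software as follows; the range limits and window width are hypothetical example values, not taken from the text.

```python
# Minimal sketch of a limit check and a validation check on a reading.

SENSOR_MIN, SENSOR_MAX = 0.0, 100.0   # valid sensor range (limit check)
MAX_STEP = 5.0                        # allowed change per sample (validation check)

def check_value(value, last_value):
    """Return (ok, reason) for a new sensor reading."""
    if not (SENSOR_MIN <= value <= SENSOR_MAX):
        return False, "limit check failed"
    if last_value is not None and abs(value - last_value) > MAX_STEP:
        return False, "validation check failed"
    return True, "ok"

print(check_value(42.0, 40.0))    # (True, 'ok')
print(check_value(120.0, 40.0))   # outside the sensor range
print(check_value(60.0, 40.0))    # jumps too far from the last value
```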
Chapter 20
Communication
The communication between the sensor devices and the DAQ system can be over single cables (point to point), a bus (multidrop), or wireless. The communication uses some sort of electrical interface and a protocol. The protocol is a set of rules defining the hardware and software of the communication. Figure 20.1 shows the measurement system with the sensor devices, the DAQ system, and the communication between these devices. The communication can be based on a point to point connection or a bus, with either cable or wireless connections.
20.1 Communication architectures
Figure 20.2 shows a point to point connection to the left, and bus connections in the middle and to the right. Wireless is a type of bus connection without the wire.
20.1.1 Current loop communication
A current loop uses the same pair of cables for both power and signal. The signal is very immune to noise and can be used over a long distance. Each sensor must be connected with a separate pair of cables and only one analog signal can be read per loop. Figure 20.3 shows a 4-20 mA sensor device connected to a measurement system where both the signal and the power to the sensor device use the same pair of cables.
The popular 4-20 mA interface is a current loop communication signal where the analog signal from the sensor varies between 4 mA and 20 mA. The sensor device often contains an A/D converter and a D/A converter to convert the signal from the sensing device to the signal for the analog output device. The signal from the sensing device is normally non-linear, and the conversion therefore also includes linearization of the signal. The output of a 4-20 mA device is normally a linear signal.
A pressure sensor device using the 4-20 mA interface is shown in Figure 20.4. The signal from the pressure sensor is converted to a digital signal, compensated by the temperature (linearized), and converted to a 4-20 mA signal using the D/A converter and an amplifier.
Figure 20.1: The communication from the sensor devices and the measurement system. The transmitter
device of the sensor device will be responsible for the interface and protocol.
Figure 20.2: Point to point and bus connections.
Figure 20.3: The current loop communication using one pair of cables for both the power and signal for
the sensor device (www.analogservices.com: feb-2010).
Figure 20.4: The block diagram of a 4-20 mA pressure sensor with a sensing device and a temperature sensor for temperature compensation. The output signal can be with or without the HART interface in addition to the analog 4-20 mA signal.
Current loop communication can be used for both analog and digital signals.
The HART protocol is an extension where a digital signal is superimposed on the analog signal, providing more information on the same connection. HART is often used in conjunction with the 4-20 mA interface.
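Converting the loop current back to an engineering value is a linear scaling over the 4-20 mA span. A sketch, assuming currents outside roughly 3.8-20.5 mA are treated as loop faults (fault limits in the style of NAMUR NE 43; the exact limits vary by vendor):

```python
def current_to_value(i_ma, low, high):
    """Scale a 4-20 mA loop current linearly to the range [low, high].

    Currents outside roughly 3.8-20.5 mA are treated as loop faults
    (broken wire, short circuit, failed transmitter).
    """
    if not 3.8 <= i_ma <= 20.5:
        raise ValueError(f"loop fault: {i_ma:.2f} mA outside 3.8-20.5 mA")
    return low + (i_ma - 4.0) * (high - low) / 16.0

# Example: a pressure transmitter ranged 0-10 bar.
p = current_to_value(12.0, 0.0, 10.0)   # 12 mA is mid-scale -> 5.0 bar
```

Using 4 mA rather than 0 mA as the lower end is what makes fault detection possible: a broken wire gives 0 mA, which is clearly distinguishable from a valid low reading.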
20.1.2 Serial communication
Serial communication here mainly means RS-485, a simple bus where several devices can be connected at the same time. The signals are digital and the structure must follow a master/slave principle. The master is in charge of the communication at all times, requesting data from the slaves in a cyclic manner. RS-485 defines only the physical interface; the software services depend on the vendors.
Serial communication can be either point to point communication or bus communication.
20.1.3 Network communication
Network communication works the same way as an RS-485 bus, but has a defined software protocol so sensors from different vendors can be used. The protocol also defines the way the master/slave principle should work.
Network communication will be bus communication.
20.1.4 Instrument control buses
Different instrumentation buses exist; the purpose of these buses is to provide I/O abstraction and instrument abstraction using device drivers. These buses are network based.
1. LXI: LAN based bus,
2. USB: serial bus,
3. IEEE-1394 / FireWire: serial bus,
4. IEEE-488 / GPIB: based on the HP Interface Bus (HP-IB), now General Purpose Interface Bus: 8-bit parallel bus, maximum 15 devices,
5. PXI: PCI eXtensions for Instrumentation, internal cards on a computer bus,
6. PCMCIA: Personal Computer Memory Card International Association, often one or two slots on a PC, parallel port.
20.1.5 Wireless communication
Wireless communication is network communication without the cables. When several wireless sensors are interconnected, the system becomes a wireless sensor network (WSN). A wireless system however requires a gateway for the connection between the sensors and the DAQ system.
Advantages of wireless communication:
1. Avoids cabling,
2. Easy to install new sensor devices.
Each device in the network is called a node, and a node in an active network must consist of a transmitter/receiver unit (radio), a microcontroller unit, a sensor device, and a power supply unit. This is shown in Figure 20.5.
Wireless communication can be standalone nodes or nodes connected in a network. A network of sensor nodes is called a sensor network, and a wireless sensor network when wireless nodes are used. A sensor network is a collection of sensor devices cooperating to measure the sensing parameters, with a single point of connection.
Things to take into consideration when using wireless communication and wireless nodes:
1. Power consumption,
2. Number of nodes in the network,
Figure 20.5: The contents of an active node in a wireless sensor network.
Figure 20.6: The relationship between wireless standards, the data rate, the range, and the type of transfer objects (www.iar.com: dec-08).
3. Security.
Some of the standard wireless protocols are shown in Figure 20.6. This figure shows the relationship between some of the wireless standards, the data rate, the range, and the type of transfer objects. The ranges are divided into personal area networks (PAN), local area networks (WLAN), and wide area networks (WWAN). Short range in the figure is 10 m to 100 m, and long range may be kilometers. Low data rate is kilobytes (KB) and high data rate is up to gigabytes (GB).
A set of standards exists, like RFID, Bluetooth, ZigBee and WiFi:
1. RFID is a passive sensor, only transmitting the information when asked by a transmitter.
2. Bluetooth is an active sensor system, and has a limitation of 8 concurrent nodes: one master and 7 slaves.
3. ZigBee is an active sensor system. The maximum number of nodes for ZigBee is 65535, with different types of architectures. One architecture used a lot is the mesh network, meaning that a node in the network connects only to its nearest neighbour nodes.
20.2 Wireless Sensor
The motivations for using wireless technology are:
Figure 20.7: The structure for a bar code (www.taltech.com: jun-09).
1. no need for a cable,
2. installation in remote and hostile areas,
3. temporary and mobile installations,
4. provides larger flexibility,
5. enables new types of applications.
20.2.1 Bar Codes
Bar codes are like a printed version of the Morse code. Different bar and space patterns are used to represent different characters. Sets of these patterns are grouped together to form a symbology. There are many types of bar code symbologies, each having their own special characteristics and features. Most symbologies were designed to meet the needs of a specific application or industry.
Bar codes provide a simple and inexpensive method of encoding text information that is easily read by inexpensive electronic readers. Bar coding also allows data to be collected rapidly and with extreme accuracy. A bar code consists of a series of parallel, adjacent bars and spaces. Predefined bar and space patterns or "symbologies" are used to encode small strings of character data into a printed symbol. Bar codes can be thought of as a printed type of the Morse code, with narrow bars (and spaces) representing dots and wide bars representing dashes. A bar code reader decodes a bar code by scanning a light source across the bar code and measuring the intensity of light reflected back by the white spaces. The pattern of reflected light is detected with a photodiode which produces an electronic signal that exactly matches the printed bar code pattern. This signal is then decoded back to the original data by inexpensive electronic circuits. Due to the design of most bar code symbologies, it does not make any difference if you scan a bar code from right to left or from left to right.
The basic structure of a bar code consists of a leading and trailing quiet zone, a start pattern, one or more data characters, optionally one or two check characters, and a stop pattern. Figure 20.7 shows the basic structure of a bar code.
There are a variety of different bar code encoding schemes or "symbologies", each of which was originally developed to fulfill a specific need in a specific industry. Several of these symbologies have matured into de-facto standards that are used universally today throughout most industries.
Bar Code reader
There are several different types of bar code readers available, each using a slightly different technology. These are pen type readers (e.g. bar code wands), laser scanners, CCD readers, and camera-based readers.
Pen type readers and laser scanners; Pen type readers consist of a light source and a photo diode placed next to each other in the tip of a pen or wand. The photo diode measures the intensity of the light reflected back from the light source and generates a waveform that is used to measure the widths of the bars and spaces in the bar code. Dark bars in the bar code absorb light and white spaces reflect light, so that the voltage waveform generated by the photo diode is an exact duplicate of the bar and space pattern in the bar code. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded.
CCD readers; Use an array of hundreds of tiny light sensors lined up in a row in the head of the reader. Each sensor can be thought of as a single photo diode that measures the intensity of the
Figure 20.8: A UPC bar code with number system digit 6, manufacturer code 39382, product code 00039, and check digit 3 (www.howstuffworks.com: jun-09).
light immediately in front of it. The important difference between a CCD reader and a pen or laser scanner is that the CCD reader measures emitted ambient light from the bar code, whereas pen or laser scanners measure reflected light of a specific frequency originating from the scanner itself.
Camera based readers; Use a small video camera to capture an image of a bar code. The reader then uses digital image processing techniques to decode the bar code. Video cameras use the same CCD technology as in a CCD bar code reader, except that instead of having a single row of sensors, a video camera has hundreds of rows of sensors arranged in a two dimensional array so that they can generate an image.
Barcode protocols
Code 39 The normal Code 39 is a variable length symbology that can encode the following 44 characters: 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-. *$/+%. Code 39 is the most popular symbology in the non-retail world and is used extensively in manufacturing, military, and health applications. Each Code 39 bar code is framed by a start/stop character represented by an asterisk (*). The asterisk is reserved for this purpose and may not be used in the body of a message.
The full ASCII version of Code 39 is a modification of the normal (standard) version that can encode the complete 128 ASCII character set (including asterisks). The full ASCII version is implemented by using the four characters $/+% as shift characters to change the meanings of the rest of the characters in the normal Code 39 character set.
UPC Universal Product Code. UPC-A is a 12 digit, numeric symbology used in retail applications. UPC-A symbols consist of 11 data digits and one check digit. The first digit is a number system digit that normally represents the type of product being identified. The following 5 digits are a manufacturer code and the next 5 digits identify a specific product. Figure 20.8 shows an example of a UPC bar code.
The coding of the numbers (bar and space widths, in units) is:
Number  Widths
0       3 2 1 1
1       2 2 2 1
2       2 1 2 2
3       1 4 1 1
4       1 1 3 2
5       1 2 3 1
6       1 1 1 4
7       1 3 1 2
8       1 2 1 3
9       3 1 1 2
The bar code starts with the standard start code of 1-1-1 (bar-space-bar) and ends with the same code; the stop character is also a 1-1-1 (bar-space-bar). In Figure 20.8, starting from the left is the start code: a one-unit-wide black bar followed by a one-unit-wide white space followed by a one-unit-wide black bar (bar-space-bar). Then all 12 numbers follow as combinations of black bars and white spaces, and the bar code ends with the stop character. Note that in the middle there is a standard 1-1-1-1-1 (space-bar-space-bar-space), which is important because it means the numbers on the right are optically inverted!
The last number is the check digit, used as a checksum of the bar code. The check algorithm is (using the example from Figure 20.8):
1. Add together the value of all of the digits in odd positions (digits 1, 3, 5, 7, 9 and 11); 6 + 9 + 8 + 0 + 0 + 9 = 32,
2. Multiply that number by 3; 32 × 3 = 96,
3. Add together the value of all of the digits in even positions (digits 2, 4, 6, 8 and 10); 3 + 3 + 2 + 0 + 3 = 11,
4. Add this sum to the value in step 2; 96 + 11 = 107,
5. The check digit is the number that must be added to the number in step 4 to make it a multiple of 10; 107 + 3 = 110.
UPC numbers are assigned to specic products and manufacturers by the Uniform Code Council
(UCC).
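The check digit algorithm above can be written compactly. A sketch:

```python
def upc_check_digit(digits11):
    """Compute the UPC-A check digit from the first 11 digits (as a string)."""
    odd = sum(int(d) for d in digits11[0::2])    # positions 1, 3, 5, 7, 9, 11
    even = sum(int(d) for d in digits11[1::2])   # positions 2, 4, 6, 8, 10
    total = odd * 3 + even
    return (10 - total % 10) % 10

# The example from Figure 20.8: 6 39382 00039 -> check digit 3
check = upc_check_digit("63938200039")   # 3
```

The final `(10 - total % 10) % 10` is the same as step 5 of the algorithm: the smallest number that brings the total up to a multiple of 10.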
EAN
European Article Numbering (EAN) system (also called JAN in Japan) is a European version of UPC.
It uses the same size requirements and a similar encoding scheme as for UPC codes.
EAN-8 encodes 8 numeric digits consisting of two country code digits, five data digits and one check digit. B-Coder will accept up to 7 numeric digits for EAN-8 and will automatically calculate the check digit for you. If you enter fewer than 7 digits, or if you enter any digits other than 0 to 9, B-Coder will display a warning message. If the option "Enable Invalid Message Warnings" in the Preferences menu is not selected and you do not enter 7 digits, B-Coder will left pad short messages with zeros and truncate longer messages so that the total length is 7.
EAN-13 is the European version of UPC-A. The difference between EAN-13 and UPC-A is that EAN-13 encodes a 13th digit into the parity pattern of the left six digits of a UPC-A symbol. This 13th digit, combined with the 12th digit, usually represents a country code.
Summary
Symbology  Data Capacity
UPC-A      12 numeric digits - 11 user specified and 1 check digit.
UPC-E      7 numeric digits - 6 user specified and 1 check digit.
EAN-8      8 numeric digits - 7 user specified and 1 check digit.
EAN-13     13 numeric digits - 12 user specified and 1 check digit.
Code 39    Variable length alphanumeric data - the practical upper limit depends on the scanner and is typically between 20 and 40 characters.
Code 128 is more efficient at encoding data than Code 39 or Code 93, and is the best choice for most general bar code applications. Code 39 and Code 128 are both very widely used, while Code 93 is rarely used.
Barcode system
A bar code system must consist of a bar code reader and a computer, as shown in Figure 20.9. The bar code application on the computer converts the bar code to some valid information in the computer. The advantage is that the bar code on the items can stay the same, while the data can be updated in the computer only.
Figure 20.9: The usage of a bar code reader in a computer system. The bar code is converted to valid
information by a computer application.
20.2.2 RFID
Radio-frequency identification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders.
An RFID tag is an object that can be applied to or incorporated into a product, animal, or person for the purpose of identification using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader. There are two types of RFID tags: active RFID tags, which contain a battery (lasting up to six years with an active sleep mode), and passive RFID tags, which have no battery. Passive RFID tags use the electrical current induced in the antenna by the incoming radio frequency signal to power up the CMOS integrated circuit in the tag and transmit a response.
Most RFID tags contain at least two parts:
1. an integrated circuit for storing and processing information, modulating and demodulating an (RF) signal, and other specialized functions,
2. an antenna for receiving and transmitting the signal. Chipless RFID allows for discrete identification of tags without an integrated circuit, thereby allowing tags to be printed directly onto assets at a lower cost than traditional tags.
Figure 20.10 shows a block diagram of an RFID transponder chip. The only external component is the coil, or antenna, connected to the Coil1 and Coil2 pins. The voltage induced in the coil is fed to a full wave rectifier block and used as the voltage generator for the chip. The voltage is available between VDD and VSS. The clock frequency is generated by the clock extractor block. This clock frequency is used by the sequencer to clock out the data in the memory array block. The data, 64 bits, are shifted out as serial data into the data encoder block. In the data encoder the data is processed and fed to the data modulator block according to a protocol. The data modulator drives the antenna coil and produces an amplitude modulated HF signal with the data in the side bands. Different protocols are used in RFID systems for coding the data, and both the transmitter and the receiver must agree on using the same protocol.
The usage of an RFID reader in a computer system is shown in Figure 20.11. The RFID code is converted by a computer application to some sort of valid information. The advantage of this solution is that any change is updated in the computer application, not in the tag code. One example is the price of an item: when the price is changed, only the conversion table has to be updated, not the RFID tags on the items.
This is the same solution as with a bar code system, as shown in Figure 20.9. An advantage of the RFID system is the distance between the items and the RFID reader; this distance can be longer, and fewer obstacles can stop the communication between the items and the reader.
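The conversion table can be sketched as a simple lookup. The tag codes and product data below are hypothetical; a real system would keep the table in a database:

```python
# Product table kept in the computer application; the 64-bit tag codes
# on the items never change, only this table is updated.
products = {
    0x0123456789AB: {"name": "Coffee 500 g", "price": 34.90},
    0x0123456789AC: {"name": "Tea 250 g",    "price": 29.50},
}

def lookup(tag_code):
    """Convert a raw RFID tag code to valid product information."""
    return products.get(tag_code)   # None for unknown tags

# A price change touches only the table, not the tags on the items:
products[0x0123456789AB]["price"] = 32.90
```

The same structure applies to a bar code system; only the key (tag code versus scanned bar code string) differs.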
20.2.3 RFID or Bar Codes
Both RFID and bar codes can be used as forms of automatic data collection. RFID uses a tag applied to a product in order to identify and track it via radio waves, using an active sensor (an integrated circuit and an antenna). A bar code is an optical representation of data given by the width and spacing of parallel lines, using a passive sensor (a printed label).
Figure 20.10: Block diagram of the EM4100 transponder chip (EM Microelectronic: jun-09).
Figure 20.11: The usage of an RFID reader in a computer system; the RFID code is converted to valid information by a computer application.
Figure 20.12: The globe with a set of satellites, some visible and some invisible from a specic location
on the globe (www.wikipedia.org 2006).
Advantages of RFID; RFID technology is more capable than bar code technology, allowing tags to be read from a greater distance. In addition, RFID tags can be read much faster than bar codes, because bar codes require a direct line of sight whereas about 40 RFID tags can be read at once.
Advantages of bar codes; Bar codes are cheaper than RFID technology. Bar code tags are also much lighter and smaller than RFID tags, making them easier to use.
20.2.4 GPS
The Global Positioning System (GPS) is a satellite navigation system with up to 32 (nominally 24) satellites orbiting the earth in 12 hour orbits. The satellites are distributed such that, on average, there are 12 satellites visible in each hemisphere. The satellites are time synchronized using onboard atomic clocks, and continuously transmit the time and other information. The transmission uses a common carrier, but each satellite has its own pseudo random number sequence. Another, encrypted carrier is available for military purposes. The non-encrypted carrier can be used by any GPS receiver. The earth with the satellites is shown in Figure 20.12, with a set of visible and non-visible satellites. The GPS receiver is a satellite receiver which listens for the signals and measures the time of arrival, comparing it to the GPS time when the data was sent. This information provides a pseudo-range to each satellite that is received, and the ranges are used to compute the position. Four satellite signals are needed to get a three-dimensional position.
GPS receivers typically listen for all 12 satellites that should be visible in the hemisphere where the receiver is located, and use the strongest signals to compute the position. The more satellites the receiver uses, the better the accuracy will be.
In order to know which satellite is where, all satellites from time to time transmit a database containing the orbital data for all the satellites. The GPS receiver should store this database in nonvolatile memory to preserve this information between power cycles.
At a cold start (power up) the GPS receiver must cycle through all possible satellite codes until it receives a satellite signal with sufficient signal to noise ratio to download the orbital information and current time. A cold start can take up to 15 minutes, depending on the GPS receiver. After a warm start, for example after a watchdog reset or after moving a large distance, the startup time is often less than 2 minutes.
The protocol used by many GPS receivers is NMEA-0183 (National Marine Electronics Association). This protocol uses a simple ASCII, serial communication protocol that defines how data is transmitted in a sentence from one transmitter to one receiver at a time.
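Receiving such a sentence can be sketched as follows. The checksum of an NMEA-0183 sentence is the XOR of all characters between the leading '$' and the '*'; a GGA sentence carries time and position, with latitude and longitude in ddmm.mmm form. The sentence body below is a commonly quoted example, and the field handling is deliberately simplified:

```python
def nmea_checksum(body):
    """XOR of all characters between '$' and '*' in an NMEA-0183 sentence."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return cs

def parse_gga(sentence):
    """Extract time, latitude and longitude from a GGA sentence (checksum verified)."""
    body, _, given = sentence.lstrip("$").partition("*")
    if int(given, 16) != nmea_checksum(body):
        raise ValueError("checksum mismatch")
    f = body.split(",")
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0    # ddmm.mmm  -> degrees
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0    # dddmm.mmm -> degrees
    return f[1], (lat if f[3] == "N" else -lat), (lon if f[5] == "E" else -lon)

# A widely quoted example GGA sentence body (checksum computed here):
body = "GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,"
sentence = "$%s*%02X" % (body, nmea_checksum(body))
t, lat, lon = parse_gga(sentence)   # 48.1173 N, 11.5167 E at 12:35:19 UTC
```

A real receiver library would also handle southern/western coordinates without a fix, empty fields, and the other sentence types (RMC, GSA, GSV, and so on).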
20.3 Wireless sensor network
Wireless sensor networks are networks of small, battery-powered, memory-constrained devices named sensor nodes, which have the capability of wireless communication over a restricted area. A sensor node is a device in a wireless sensor network that is capable of performing some processing, gathering sensory information, and communicating with other connected nodes in the network. A node is a connection point for data transmission.
Figure 20.13: A control system based on a wired sensor network (Skavhaug & Pettersen 2007).
Figure 20.14: A system controlling a plant or a process using a closed control loop.
The usage of wireless networks is becoming more and more popular, including wireless sensor networks. It should be possible to achieve up to a 10% reduction of construction cost by utilizing wireless instrumentation in new plants and facilities [1]. A control system using cabling for the sensors and the actuators is shown in Figure 20.13.
The system needs to receive the sensor data at the right time and needs to know about any problem with the sensor data, like errors, no contact, and so on. The system also needs to control the actuators at the right time and to receive the right feedback. The control loop is closed [2] as long as the connection between the control system, the actuators, and the sensors is OK.
A system controlling a plant or a process using measurement and control in a closed control loop is
shown in Figure 20.14.
A control system using wireless communication for the sensors and the actuators is shown in Figure
20.15.
[1] Dag Sjong, StatoilHydro, at Servomøtet 2007.
[2] A closed control loop uses feedback from the plant/process in the control algorithm.
Figure 20.15: A control system based on a wireless sensor network (Skavhaug & Pettersen 2007).
Figure 20.16: The communication and sensing functions of an RFD, an FFD, and a coordinator in a sensor network.
The control loop will now be based on wireless communication. The connection should still be OK, but what about the safety of the system? Is it better to use cables? Or should the system be designed to have the same safety level with wireless communication? The system can for example be designed with safety states, meaning that different parts will enter a safe condition if the communication fails.
A network with active sensor nodes often consists of 3 different types of sensor devices:
1. Reduced Function Devices (RFD), often sensing devices only sending information,
2. Fully Functional Devices (FFD), sensing devices both sending and receiving (routing) information,
3. Coordinator, often only one device in the network, coordinating the network and acting as the gateway to other systems.
An overview of the sensing and networking functionality of such devices is shown in Figure 20.16.
Some specifications and/or requirements for a wireless sensor network:
1. Sensors spread across a geographical area using no cable at all!
2. Each sensor node has: wireless communication capability, some level of intelligence for signal processing, and networking of the data.
3. Sensor node requirements: small, battery powered, radio range of tens of meters, embedded processor, and storage.
4. Network requirements: large number of sensors (1,000 to 10,000 nodes), low energy use, network self-organization, querying ability, and different topologies, with mesh networks used most (regularly distributed networks that allow transmission only to the nearest nodes).
5. Low power usage; communication is the most energy-consuming operation: transmitting one bit costs about the same energy as 1000 instructions, so process data within the network wherever possible!
6. Protocols;
(a) International Standards
i. ZigBee
ii. Bluetooth
iii. WirelessHART
iv. Smart Sensors (IEEE 1451)
(b) Proprietary Solutions
i. Dust Networks
ii. Sensicast
(c) Smart Home Networks
i. X-10 protocol
ii. Consumer Electronic Bus (CEBus)
iii. LonWorks
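Point 5 above (process data within the network) can be illustrated with a toy energy model. The figures are illustrative only, built from the rule of thumb that one transmitted bit costs about as much energy as 1000 instructions:

```python
# Illustrative energy model (comparison only, not real hardware figures).
E_BIT = 1000          # energy units per transmitted bit
E_INSTR = 1           # energy units per executed instruction
SAMPLE_BITS = 16      # size of one raw sample

samples = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 20.4, 20.0]

# Option A: transmit every raw sample to the sink node.
cost_raw = len(samples) * SAMPLE_BITS * E_BIT

# Option B: average locally (assume ~10 instructions per sample),
# then transmit the single aggregated value.
cost_agg = len(samples) * 10 * E_INSTR + SAMPLE_BITS * E_BIT

ratio = cost_raw / cost_agg   # aggregation is nearly 8x cheaper here
```

Even with only eight samples, in-network aggregation cuts the energy per reporting period by almost a factor of eight; the saving grows with the number of samples aggregated.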
Figure 20.17: The different types of network nodes in a ZigBee network.
20.3.1 ZigBee
ZigBee is the name of a specification for a suite of high level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs) (www.wikipedia.org 2006).
The raw data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. Transmission range is between 10 and 75 meters (33-246 feet), although it is heavily dependent on the particular environment. The maximum output power of the radios is generally 0 dBm (1 mW).
The basic channel access mode specified by IEEE 802.15.4-2003 is carrier sense multiple access with collision avoidance (CSMA/CA).
ZigBee protocols are intended for use in embedded applications requiring low data rates and low power consumption. ZigBee's current focus is to define a general-purpose, inexpensive, self-organizing mesh network that can be used for industrial control, embedded sensing, medical data collection, smoke and intruder warning, building automation, home automation, domotics, etc. The resulting network will use very small amounts of power, so individual devices might run for a year or two on the originally installed battery.
The software is designed to be easy to develop on small, cheap microprocessors. The radio design used by ZigBee has been carefully optimized for low cost in large scale production. It has few analog stages and uses digital circuits wherever possible.
All the routers are mains-powered devices (lamps, heat pumps, lighting fixtures, smoke alarms) and the "end" devices are battery-powered (switches, thermostats, motion detectors).
Protocol           Band     Range    Data rate
ZigBee             2.4 GHz  10-75 m  250 kbit/s
(IEEE 802.15.4)    915 MHz  10-75 m  40 kbit/s
                   868 MHz  10-75 m  20 kbit/s
The number of ZigBee devices in a network depends on the topology, but can be up to 65535 nodes. The ZigBee standard includes three different ZigBee devices (nodes):
ZigBee End Device (ZED): A device containing just enough functionality to communicate with a parent node; the device cannot relay data from other devices.
ZigBee Router (ZR): This device functions as a router, passing data from other devices, in addition to acting as a ZED.
ZigBee Coordinator (ZC): The gateway to other networks, forming the root of the network tree. A ZigBee network will have only one ZC. The ZED and/or ZR functions can also be part of the ZC.
An example of a ZigBee network is shown in Figure 20.17.
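Routing in such a mesh, where each node talks only to its nearest neighbours, can be sketched as a shortest-path search over the node graph. The topology below is hypothetical, with one coordinator, two routers, and two end devices:

```python
from collections import deque

# Hypothetical mesh: each node lists the neighbours it can reach directly.
# "ZC" is the coordinator, "ZR*" are routers, "ZED*" are end devices.
neighbours = {
    "ZC":   ["ZR1", "ZR2"],
    "ZR1":  ["ZC", "ZR2", "ZED1"],
    "ZR2":  ["ZC", "ZR1", "ZED2"],
    "ZED1": ["ZR1"],          # end devices talk only to their parent router
    "ZED2": ["ZR2"],
}

def route(src, dst):
    """Shortest hop-by-hop route through the mesh (breadth-first search)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for n in neighbours[path[-1]]:
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None               # no route exists

# An end device reaches the coordinator through its parent router:
path = route("ZED1", "ZC")    # ['ZED1', 'ZR1', 'ZC']
```

Real ZigBee routing (AODV-based) discovers routes on demand rather than from a global table, but the hop-by-hop principle is the same.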
20.3.2 Bluetooth
Bluetooth is an industrial specification for wireless personal area networks (WPANs). Bluetooth provides a way to connect and exchange information between devices such as mobile phones, laptops, PCs, printers, digital cameras, and video game consoles via a secure, globally unlicensed short-range radio frequency (www.wikipedia.org 2006).
Class Power Range
Class #1 100 mW - 100 m
Class #2 2.5 mW - 10 m
Class #3 1 mW - 1 m
A Bluetooth device playing the role of the master can communicate with up to 7 devices playing the role of the slave. This network group of up to 8 devices (1 master and 7 slaves) is called a piconet. A piconet is an ad-hoc computer network of devices using Bluetooth technology protocols to allow one master device to interconnect with up to seven active slave devices. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into active status at any time. At any given time, data can be transferred between the master and one slave, but the master switches rapidly from slave to slave in a round-robin fashion. (Simultaneous transmission from the master to multiple slaves is possible, but not used much in practice.) Either device may switch the master/slave role at any time.
The Bluetooth specification allows connecting 2 or more piconets together to form a scatternet, with some devices acting as a bridge by simultaneously playing the master role in one piconet and the slave role in another piconet.
The Bluetooth protocol operates in the license-free ISM band at 2.45 GHz. In order to avoid interfering with other protocols which use the 2.45 GHz band, the Bluetooth protocol divides the band into 79 channels (each 1 MHz wide) and changes channels up to 1600 times per second. Implementations of versions 1.1 and 1.2 reach speeds of 723.1 kbit/s. Version 2.0 implementations feature Bluetooth Enhanced Data Rate (EDR) and thus reach 2.1 Mbit/s. Technically, version 2.0 devices have a higher power consumption, but the three times faster rate reduces the transmission times, effectively reducing consumption to half that of 1.x devices (assuming equal traffic load).
Version Data rate Note
1.1 723.1 kbit/s
1.2 723.1 kbit/s
2.0 2.1 Mbit/s
20.3.3 Wireless HART
WirelessHART is a wireless mesh network communications protocol for process automation applications. It adds wireless capabilities to the HART protocol while maintaining compatibility with existing HART devices, commands, and tools. Gateways enable communication between the WirelessHART devices and host applications in the existing plant communications network. The network uses IEEE 802.15.4 compatible radios operating in the 2.4 GHz radio band. The radios use channel hopping for communication security and reliability, as well as TDMA synchronized, latency-controlled communication between devices on the network.
WirelessHART properties:
Radio standard    IEEE 802.15.4-2006
Frequency band    2.4 GHz
Channel hopping   Yes, on a packet basis
Distance          maximum 250 m
Topologies        Mesh and Star
Figure 20.18: An overview of a WirelessHART network with a set of wireless nodes and two gateways (www.hartcomm.org: nov-09).
20.3.4 Wireless Cooperation Team
The Wireless Cooperation Team (WCT) is a collaboration between the Fieldbus Foundation, the HART
Communication Foundation, and Profibus. The team tries to agree on a wireless technology for the manufacturing
and process industries worldwide. The team is developing an interface specification and compliance guidelines
to integrate a universally accepted wireless solution into the HART, Foundation fieldbus, Profibus,
and Profinet communication networks. The team was founded in 2008.
20.3.5 Comparison of wireless standards
Figure 20.19 shows a comparison of some of the key properties regarding a wireless standard.
20.4 Distributed Systems
Distributed computing deals with hardware and software systems containing more than one processing
element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly
controlled regime.
In distributed computing a program is split up into parts that run simultaneously on multiple comput-
ers communicating over a network. Distributed computing is a form of parallel computing, but parallel
computing is most commonly used to describe program parts running simultaneously on multiple proces-
sors in the same computer. Both types of processing require dividing a program into parts that can run
simultaneously, but distributed programs often must deal with heterogeneous environments, network
links of varying latencies, and unpredictable failures in the network or the computers. An example of a
distributed system is shown in Figure 19.3.
A common architecture is a client-server architecture, where the server is the owner of a resource and
the clients request information from the server.
Figure 20.19: Comparison of some of the key properties of the WiFi, the Bluetooth, and the ZigBee
standards.
Chapter 21
Discrete Sampling
Digital systems record signals at discrete times and record no information about the signal between these
times, see Figure 19.6. Sampling thus means taking snapshots of analog signals at discrete times. The time
interval between the samples is kept constant in most applications, and its inverse is known as the sampling rate.
The designer or user of such digital systems, like a DAQ system, must be aware of the problems of recording
at discrete times and take the right actions to get the correct information from the analog signals. The
conversion from an analog value to a binary number is called quantization. The binary number consists
of a number of bits used for the conversion, normally in the range of 8 to 24 bits.
Figure 21.1 shows how a DAQ system separates the analog and digital sections of a measurement
system; the digital section contains only a digital representation of the analog signal at discrete
time intervals. This is shown in the right part of the figure.
21.1 Sampling-rate theorem
When a computer system is used for recording analog signals at discrete times, the measurement
reads a value only at specific times, with no measurement between these specific times. How can the
user of the system know that he is getting all the necessary information from the analog signal at only
these specific times?
The rate of measurement at these specific times is known as the sampling rate, and it is important to select the
right sampling rate for the measurement. Incorrect selection of the sampling rate can lead to misleading
results (Wheeler & Ganji 2004).
Let us start with the 10 Hz sine wave shown in Figure 21.2 that we want to sample.
The frequency of the sine wave in Figure 21.2 is 10 Hz, and we will try to sample it with 5, 11, 18, and
20.1 samples per second. A 10 Hz sine wave has a periodic time of 100 ms (T = 1/f), so using 5 samples
per second will give exactly the same value of the sine wave in every second period of the wave. Using 5
samples per second will then just give a constant value, as shown in Figure 21.3. Using 10 samples per
Figure 21.1: The DAQ system consists of an analog section and a digital section. The sensing part
will always be analog, and somewhere in the system the DAQ will convert an analog value to a digital
representation. This is shown in the right part of the figure.
CHAPTER 21. DISCRETE SAMPLING 160
Figure 21.2: A 10 Hz sine wave to be sampled (Wheeler & Ganji 2004).
Figure 21.3: A 10 Hz sine wave sampled with 5 or 10 samples per second (Wheeler & Ganji 2004).
second will give the same value but at every period of the sine wave.
This shows that the sampling frequency must be at least higher than the frequency of the signal to
be sampled, but how much higher? Let us check with 11 samples per second, one sample per second
higher than the frequency of the sine wave. Figure 21.4 shows the result of sampling the 10 Hz sine
wave at 11 samples per second. The result is a sine wave, but compare the time scale with Figure 21.2:
the result is a sine wave with another frequency.
Raising the number of samples per second to 18 gives the result in Figure 21.5.
The result is much better than in Figure 21.4, but the sampling is still not good enough.
Raising the rate to 20.1 samples per second, as shown in Figure 21.6, gives a signal with the same frequency
information as in Figure 21.2. It turns out that for any sampling rate greater than twice the highest
frequency f_m, the frequency f_m will appear correctly in the sampled information. This is the sampling-rate
theorem:

f_s ≥ 2 f_m

stating that the sampling frequency f_s should be at least twice the highest frequency component of the
original signal in order to reconstruct the original waveform (frequency) correctly (Wheeler & Ganji 2004).
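The folding of frequencies described above can be sketched numerically. The small helper below (a hypothetical function, not from the text) computes the apparent frequency obtained when a pure sine is sampled at a given rate, and reproduces the behaviour of Figures 21.3 to 21.6:

```python
def apparent_frequency(f_signal, f_sample):
    """Apparent (alias) frequency observed when sampling a pure sine of
    frequency f_signal at rate f_sample. The observed frequency folds
    into the range [0, f_sample / 2]."""
    return abs(f_signal - round(f_signal / f_sample) * f_sample)

# The 10 Hz sine wave from Figure 21.2, sampled at the rates used in the text:
for fs in [5.0, 11.0, 18.0, 20.1]:
    print(f"fs = {fs:5.1f} Hz -> apparent frequency {apparent_frequency(10.0, fs):.2f} Hz")
```

At 5 samples per second the apparent frequency is 0 Hz (the constant value of Figure 21.3), at 11 samples per second it is 1 Hz (the slow sine wave of Figure 21.4), and only above 20 samples per second does the full 10 Hz appear.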
21.2 A/D conversion
A/D systems is called analog input subsystems as they convert analog signals to digtal signals for a
computer. These analog input subsystems typically consists of eight or sixteen input channels, but only
the number of channels used will inuence on the sampling rate for the system. Sampling will always be
Figure 21.4: The 10 Hz sine wave signal sampled with 11 samples per second (Wheeler & Ganji 2004).
Figure 21.5: The 10 Hz sine wave signal sampled with 18 samples per second (Wheeler & Ganji 2004).
Figure 21.6: Sampling of the sine wave at a sampling frequency just above 2 f_m.
Figure 21.7: The sampling of eight consecutive analog input channels, the sampling period, and the
channel skew.
a time-dependent operation, and the analog signal should be kept constant during the conversion period.
A sample and hold (S/H) circuit is normally used, consisting of a signal buffer, an electronic switch, and a
capacitor.
The operation of the S/H section is:
1. The electronic switch connects the capacitor to the input signal through the signal buffer, and the
capacitor is charged to the input voltage,
2. The electronic switch disconnects the input signal, and the capacitor holds a constant
voltage while the A/D converter converts the analog voltage to a digital representation of the
input signal,
3. The S/H section is normally common for analog input systems with several input channels,
4. The charging is repeated for every conversion cycle of the A/D converter.
The DAQ system contains an analog multiplexer to be able to sample signals from several channels.
The A/D converter is often an expensive component, while the analog multiplexer is a much cheaper
component. When several channels share one A/D converter, the highest sampling rate of each channel will be:

Maximum sampling rate per channel = (Maximum sampling rate for A/D converter) / (Number of channels used)
Using a multiplexer means that the input channels, or analog input signals, are scanned one after
another. The sampling period is then the time from when one specific input channel is
converted until the same input channel is converted again. This is why the maximum sampling
rate depends on the number of analog channels used, and why the designer of the DAQ
system should only scan the analog inputs that are in use, not all available input channels. The maximum
sampling rate for the DAQ system is achieved when using only one input channel. When several channels
are used, those channels cannot be sampled simultaneously, and a time gap will exist between consecutively
sampled channels. This time gap is called channel skew. The timing for sampling the analog channels,
the sampling period, and the channel skew are shown in Figure 21.7.
An analog input subsystem will, as a minimum, consist of a multiplexer for several analog input
signals, a sample and hold section, and the analog-to-digital (A/D) converter. These devices are
interconnected as a DAQ system in Figure 21.8.
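The per-channel rate and channel skew relations above can be sketched as two small functions. The converter rate of 100 kS/s is an illustrative figure, not from the text:

```python
def per_channel_rate(adc_max_rate_hz, channels_used):
    """Maximum sampling rate per channel for a multiplexed A/D converter:
    the converter's maximum rate divided by the number of channels scanned."""
    return adc_max_rate_hz / channels_used

def channel_skew(adc_max_rate_hz):
    """Time gap between two consecutively scanned channels, i.e. the
    duration of one A/D conversion."""
    return 1.0 / adc_max_rate_hz

# A hypothetical 100 kS/s converter scanning 8 channels:
print(per_channel_rate(100_000, 8))   # 12.5 kS/s per channel
print(channel_skew(100_000))          # 10 microseconds between channels
```

This also shows numerically why scanning only the channels in use pays off: dropping from 8 to 4 scanned channels doubles the rate available to each channel.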
21.3 Simultaneous Sample and Hold
In some systems the channel skew cannot be tolerated: all the channels must be read at exactly the same
time. Then simultaneous sample and hold is used to sample all input signals at the same time and
hold the values until the A/D converter has converted all of them. Either a set of electronic switches
and capacitors can be used, or separate A/D converters for each input channel. Figure 21.9 shows the
usage of simultaneous sample and hold logic.
Figure 21.8: A DAQ system consisting of an analog multiplexer, a sample and hold section, and an
analog to digital converter.
Figure 21.9: The sampling of eight consecutive analog input channels with Simultaneous Sample and
Hold section.
21.4 Aliasing
If the sampling frequency is too low, the discrete-time values will not be a correct conversion of the
continuous-time values, as shown in Figure 21.10.
This is known as aliasing. Figure 21.4 also shows aliasing, as the sampling frequency is too low
to capture the correct information. One way of avoiding aliasing is to add a hardware filter before the
A/D converter to remove frequency components above half the sampling frequency, see Figure 21.11.
The LP filter can be located before or after the mux, normally after the mux. If special filtering
of the different input signals is wanted, use an LP filter before the mux, but remember that every input
signal then needs a filter. Using a filter after the mux, only one LP filter is needed.
The design of an RC filter amounts to defining the cut-off frequency for the filter; this frequency is
defined from the frequency components in the input signals and the sampling frequency of the ADC.
The cut-off frequency will be:

f_cutoff = 1 / (2πτ) = 1 / (2πRC)
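The RC cut-off formula can be evaluated directly. The component values below are chosen for illustration only:

```python
import math

def rc_cutoff_hz(resistance_ohm, capacitance_farad):
    """Cut-off frequency of a first-order RC low-pass filter:
    f_cutoff = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohm * capacitance_farad)

# Hypothetical component values: 1.6 kOhm and 10 nF give a cut-off of
# roughly 10 kHz.
print(rc_cutoff_hz(1.6e3, 10e-9))
```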
21.5 Oversampling
In some cases it is desirable to have a sampling frequency considerably more than twice the desired
system bandwidth, so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter.
This process is known as oversampling, and the filter will now be part of the signal conditioning section in
Figure 21.11. Different types of software filters exist and are used for data-cleaning purposes. An effective
data-cleaning filter should satisfy two important properties (Pearson 2005):
1. outliers (1) should be replaced with data values that are more consistent with the local variation of
the nominal sequence,
(1) outlier: an entry in a dataset that is anomalous with respect to the behavior of the other entries in the dataset
(Pearson 2005)
Figure 21.10: The aliasing problem when the sampling frequency is too low. The upper part of the figure
shows a sufficiently high sampling frequency, while the lower part shows a too low sampling frequency, ending
up with a straight-line signal as the digital representation of the signal.
Figure 21.11: A lowpass filter used for avoiding aliasing in the measurement system.
Figure 21.12: A folding diagram used for estimation of the alias frequency (Wheeler & Ganji 2004).
2. the filter should cause no or little change in the nominal data sequence.
Digital filter techniques include exponential, box-car averaging, moving average, and weighted moving
average (2) filtering, Fourier transformation, correlation analysis, and others (Meier & Zünd 2000).
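As a minimal sketch of one of the listed software filters, the moving average below replaces each sample by the mean of the last few input values, pulling an outlier back towards the local level of the nominal sequence (the function name and the data are illustrative, not from the text):

```python
def moving_average(samples, window):
    """Simple moving-average software filter: each output value is the mean
    of the last `window` input values (fewer at the start of the sequence)."""
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = samples[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A roughly constant signal with one outlier (5.0); the filter reduces the
# outlier's effect at its position but also smears it into the neighbours.
raw = [1.0, 1.1, 0.9, 5.0, 1.0, 1.1]
print(moving_average(raw, 3))
```

Note the trade-off visible in the output: the outlier is attenuated, but the nominal sequence around it is also changed, which is why Pearson's second property above matters when choosing a data-cleaning filter.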
21.6 Folding diagram
It is possible to estimate the lowest alias frequency using a folding diagram. The folding diagram can be
used to predict the alias frequency based on the signal frequency and the sampling rate, as follows:
1. Compute the folding frequency: f_N = f_s / 2, where f_s is the sampling frequency,
2. Compute the folding diagram index f_m / f_N, where f_m is the sampled signal frequency,
3. Find the index f_m / f_N in the folding diagram, draw a vertical line to the lowest line, and read the
folding diagram index on the lowest line (base line),
4. The alias frequency will be: folding diagram index × f_N.
The folding diagram is shown in Figure 21.12.
Example 16 A system has a sampling frequency of 100 Hz and a maximum frequency of interest of
80 Hz. As the sampling frequency is not twice the frequency of interest, there will be an alias frequency,
but where?
1. The folding frequency: f_N = 100 / 2 = 50 Hz,
2. Folding diagram index: 80 / 50 = 1.6,
3. Index read from the folding diagram = 0.4,
4. The lowest alias frequency will be: f_a = 0.4 × 50 = 20 Hz.
Your measurement system will read a new frequency signal because the sampling frequency is wrong.
What should the correct sampling frequency be?
(2) also called Savitzky-Golay filter or Hampel filter.
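The folding-diagram procedure can also be expressed in closed form: the signal frequency is folded around the nearest multiple of the sampling frequency. The function below (a hypothetical helper, not from the text) reproduces the result of Example 16:

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Lowest alias frequency produced when sampling f_signal at f_sample.
    Equivalent to reading the folding diagram: the signal frequency is
    folded around multiples of the sampling frequency into [0, f_sample/2]."""
    return abs(f_signal_hz - round(f_signal_hz / f_sample_hz) * f_sample_hz)

# Example 16 from the text: an 80 Hz signal sampled at 100 Hz aliases to 20 Hz.
print(alias_frequency(80.0, 100.0))  # 20.0
```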
Figure 21.13: A saw tooth wave form.
Example 17 A system has a sampling frequency of 100 Hz and a maximum frequency of interest of
100 Hz. As the sampling frequency is not twice the frequency of interest, there will be an alias frequency,
but where?
1. The folding frequency: f_N = 100 / 2 = 50 Hz,
2. Folding diagram index: 100 / 50 = 2,
3. Index read from the folding diagram = 0.0,
4. The lowest alias frequency will be: f_a = 0.0 × 50 = 0 Hz.
Note that the signal is not zero, but a signal with zero frequency: a steady signal.
21.7 Spectral analysis of Time varying signals
A sensor signal often consists of a sum of signals with different frequencies. An example is the
sawtooth waveform: a simple waveform, but consisting of a set of frequencies. See Figure 21.13 for a
sawtooth waveform.
The method used to determine the frequency components is known as Fourier series analysis.
The lowest frequency in a periodic wave is f_0 and is called the fundamental or first harmonic frequency.
The first harmonic frequency (3) has a period T_0 and angular frequency ω_0. It can be shown that any
periodic function f(t) can be represented by the sum of a constant and a series of sine and cosine waves
(Wheeler & Ganji 2004). The representation is:
f(t) = a_0 + a_1 cos ω_0 t + a_2 cos 2ω_0 t + ... + a_n cos nω_0 t
     + b_1 sin ω_0 t + b_2 sin 2ω_0 t + ... + b_n sin nω_0 t

The constant a_0 is the time average of the function over the period T:

a_0 = (1/T) ∫_0^T f(t) dt

and the constants a_n will be:

a_n = (2/T) ∫_0^T f(t) cos(nω_0 t) dt

and the constants b_n will be:

b_n = (2/T) ∫_0^T f(t) sin(nω_0 t) dt
By using Fourier series analysis the frequency components of any signal can be determined,
and the number of frequency components to include can be decided for the measurement system.
(3) The angular frequency ω = 2πf, where f = 1/T.
21.8 Spectral Analysis using the Fourier transform
The Fourier transform is a generalization of the Fourier series and can be applied to any practical function by
using the Fast Fourier Transform (FFT). The FFT starts with the Fourier series, but uses the complex exponential
form (Wheeler & Ganji 2004). The sine and cosine functions are represented as:

cos x = (e^(jx) + e^(-jx)) / 2        sin x = (e^(jx) - e^(-jx)) / (2j)

where j = √(-1). The signal can then be stated as:

f(t) = Σ_{n=-∞}^{∞} c_n e^(jnω_0 t)

where:

c_n = (1/T) ∫_{-T/2}^{T/2} f(t) e^(-jnω_0 t) dt

If a longer value of T is selected, the lowest frequency is reduced. Letting T go to infinity makes the
frequency a continuous variable and leads to the concept of the Fourier transform. The Fourier
transform of a function f(t) is defined as:

F(ω) = ∫_{-∞}^{∞} f(t) e^(-jωt) dt

F(ω) is a continuous complex-valued function. Once a Fourier transform has been determined, the
original function f(t) can be recovered from the inverse Fourier transform:

f(t) = (1/(2π)) ∫_{-∞}^{∞} F(ω) e^(jωt) dω

Using a measurement system and an A/D converter, the data are measured only at discrete times. The
Discrete Fourier Transform (DFT) can be used for analysis of these data taken at discrete times over a
finite time interval. The DFT is defined as:

F(kΔf) = Σ_{n=0}^{N-1} f(nΔt) e^(-j2πkn/N),   k = 0, 1, 2, ..., N-1

where N is the number of samples taken during a time period T. The frequency increment Δf is equal to 1/T
and the time increment Δt is equal to T/N.
21.9 FFT diagram
Consider the function

f(t) = 2 sin(2π·10·t) + sin(2π·15·t)

shown in Figure 21.14. The signal is composed of two sine waves with frequencies of 10 Hz and 15 Hz. Using
128 samples over one second, the FFT is shown in Figure 21.15.
The FFT diagram shows the frequency components of the signal; for this signal the frequency
components are 10 Hz and 15 Hz. The sampling frequency must be at least 30 Hz for this signal.
These analyses will not be part of this course, but will be part of a signal analysis course. The FFT
is a software tool that can be used for analysing the signals and finding the maximum frequency component
of the sensor signals.
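The 128-sample FFT described above can be reproduced with a few lines of numpy (assumed available; the peak-picking at the end is an illustrative shortcut):

```python
import numpy as np

# Sample f(t) = 2 sin(2*pi*10*t) + sin(2*pi*15*t) with 128 samples over one
# second, as in the text, and locate the frequency components with an FFT.
n = 128                      # number of samples
t = np.arange(n) / n         # one second of sample times (fs = 128 Hz)
signal = 2 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 15 * t)

spectrum = np.abs(np.fft.rfft(signal)) / n   # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(n, d=1.0 / n)        # bin frequencies in Hz (1 Hz apart)

# The two dominant bins sit at the two frequency components of the signal.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))         # [10.0, 15.0]
```

Because the signal is exactly periodic over the one-second window, the energy lands cleanly in the 10 Hz and 15 Hz bins with no spectral leakage.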
Figure 21.14: The function f(t) = 2 sin(2π·10·t) + sin(2π·15·t) (Wheeler & Ganji 2004).
Figure 21.15: The FFT analysis of the function f(t) = 2 sin(2π·10·t) + sin(2π·15·t) (Wheeler & Ganji 2004).
21.10 Selecting the sampling rate and filtering
A signal will consist of a set of sine and cosine waves with different frequencies; how should these
frequency components be sampled and filtered? Remember that to avoid aliasing, the sampling frequency
must be greater than twice the maximum frequency of the signal, not just the frequency of interest.
1. Find the maximum frequency of the signal by analyzing the signal and/or the system. If in any doubt,
use FFT or DFT if the signal is available,
2. Define the maximum frequency of interest,
3. Check the sampling rate of the measurement system,
(a) remember that the sampling rate will be equal for all input channels of a measurement system,
(b) a low-pass filter, often called an anti-aliasing filter, is necessary if the signal contains frequencies
above half the sampling rate,
(c) remember that a filter will not remove other frequencies, only attenuate them!
4. Consider a filter, either a hardware filter or a software filter,
(a) to avoid aliasing, use a hardware filter,
(b) if oversampling is possible, use a software filter,
5. Remember that many sensing devices eliminate unwanted frequencies at the sensing stage,
6. Try to avoid system noise at all stages in the measurement system.
Some guidelines for making good measurements:
1. Maximize the precision and accuracy,
2. Minimize the noise,
3. Match the A/D converter range to the sensor range.
21.11 Dynamic range of the lter and A/D converter
To calculate the signal attenuation, the dynamic range of the A/D converter is important. The dynamic
range is:

G_dynamic_range = 20 log10(2^N) dB

where N is the number of bits of the A/D converter. For monopolar 8- and 12-bit converters, the
dynamic range will be:

       Dynamic Range (dB)
Bits   Monopolar   Bipolar
8      48          42
12     72          66
For bipolar converters N is reduced by one, since one bit is used as a sign bit. We then need to
define the corner frequency of the filter so that the attenuation rate removes the unwanted frequencies using
both the filter and the A/D converter. Normally a filter has a corner frequency f_c and an attenuation
rate in dB/octave. The number of octaves (4) will then be:

N_oct = dynamic_range / filter_attenuation_rate

and can be used to define the corner frequency of the filter, the sampling rate, and the number of bits
of the A/D converter. The maximum frequency f_m can then be evaluated from
(4) octave: a doubling of the frequency.
Figure 21.16: A time interleaved conversion of an analog signal using two A/D converters.
f_m = f_c · 2^(N_oct)
and the sampling rate should be twice the maximum frequency f_m. If the resulting sampling rate is too high,
a higher filter attenuation must be considered.
The attenuation of a Butterworth filter will be:

Butterworth attenuation
first order    6 dB/octave
eighth order   48 dB/octave

but remember that a higher order of the filter also gives a larger phase shift in the passband.
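The dynamic-range and octave relations above can be sketched as follows; the corner frequency of 100 Hz is an illustrative value, not from the text:

```python
import math

def dynamic_range_db(bits, bipolar=False):
    """Dynamic range of an A/D converter: 20*log10(2^N) dB.
    For a bipolar converter one bit is the sign bit, so N is reduced by one."""
    n = bits - 1 if bipolar else bits
    return 20.0 * math.log10(2 ** n)

def max_signal_frequency(corner_hz, dynamic_range, attenuation_db_per_octave):
    """Maximum frequency f_m = f_c * 2^(N_oct), where N_oct is the number of
    octaves the filter needs to attenuate down to the converter's range."""
    n_oct = dynamic_range / attenuation_db_per_octave
    return corner_hz * 2 ** n_oct

# A monopolar 8-bit converter (about 48 dB) combined with an eighth-order
# Butterworth filter (48 dB/octave) needs roughly one octave, so f_m is
# about twice the corner frequency:
print(dynamic_range_db(8))
print(max_signal_frequency(100.0, 48.0, 48.0))  # 200.0
```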
21.12 Time interleaved A/D converters
The multiplexing of the analog input signals and the sampling frequency are limitations for the A/D
conversion. Several options are available to overcome these limitations:
1. use one A/D converter for each input; a more expensive solution in both physical space and cost,
2. use a faster A/D converter; a more expensive solution in cost,
3. use time-interleaved A/D converters; a more expensive solution in physical space.
Figure 21.16 shows a time-interleaved solution with 2 A/D converters. With this solution the
maximum sampling rate can be twice that of a solution using only one A/D converter.
21.13 Nyquist Frequency
The Nyquist frequency is half the sampling frequency and is sometimes called the folding frequency, or
the cut-off frequency, of a sampling system (www.wikipedia.org 2006).
The sampling theorem shows that aliasing can be avoided if the Nyquist frequency is greater than the
bandwidth, or maximum component frequency, of the signal being sampled (www.wikipedia.org 2006).
Chapter 22
Logging
Logging of data is valuable for any check of the experiment, any check of the data, or any check of
the trend of the data. Logging can be built into the measurement system, into the monitoring and control
system (usual), or run as an external system.
22.1 Sensor data
The logged data should include:
1. the sensor value,
2. the converted value (unit value),
3. the time,
4. the status of the sensor value.
Figure 22.1 shows an example of logging where the logging is built into the system, and data are
logged to file.
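The four logged fields listed above can be written as one record per sample. This is a minimal sketch using a CSV file; the function name, file name, and field values are illustrative, not from the text:

```python
import csv
import time

def log_sample(path, sensor_value, converted_value, status):
    """Append one logging record to a CSV file: the raw sensor value, the
    converted (unit) value, the time, and the status of the sensor value,
    following the field list in section 22.1."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([sensor_value, converted_value, time.time(), status])

# Hypothetical reading: raw ADC count 512, converted to 22.3 deg C, status OK.
log_sample("history.csv", 512, 22.3, "OK")
```

Appending to a file like this also preserves the historical data if the monitoring application is stopped, as discussed in the next section.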
22.2 Historical data
Historical data is sensor data logged at a time different from the current time. Saving sensor data at specific
time intervals will give historical data. Historical data must be saved in specific files or in a database
for later use. Saving the data in files will keep the historical data if the monitoring and/or control
application is stopped, or in case of a system failure or breakdown.
22.3 Trend curves
A trend curve is a utility (or tool) to display historical data: all data, or data between two specific times (a time interval).
Figure 22.2 shows the trending, filtering, and prediction of a signal, where the trend is the signal in the
past. The trend can be drawn as a curve between each value or as a smoothing curve, as shown in the figure.
Figure 22.1: The structure of an internal logging module.
CHAPTER 22. LOGGING 172
Figure 22.2: The trending, filtering, and prediction of a signal or value.
Chapter 23
Statistical analysis of Experimental
data
Reading signals from sensor devices usually introduces a certain amount of randomness, which can affect
the conclusions drawn from the results. This chapter deals with some important statistical methods that
can be used when drawing these conclusions.
23.1 Introduction
Randomness will always be part of signals from sensor devices, even if the value to be measured is fixed.
This randomness is due to uncontrollable variables affecting the measurand and lack of precision in the
measurement system. The errors are of two types:
1. systematic errors; repeatable errors that can be minimized by calibration of the measurement
system,
2. random errors; errors handled by statistical analysis, both in planning the experiments and when
evaluating the results of the experiments.
This means that some analysis should be performed on the values or sets of values from the DAQ
system. Figure 23.1 shows a monitoring system receiving digital values. These values should be validated
due to the randomness of signals from sensor devices.
23.2 General concepts and denitions
23.2.1 Denitions
1. Population; the entire collection of measurements,
2. Sample; a representative subset of the population used for the experiments,
3. Sample space; the set of all possible outcomes of an experiment,
Figure 23.1: Some analysis should be performed on the digital values from the DAQ system to validate
the data.
CHAPTER 23. STATISTICAL ANALYSIS OF EXPERIMENTAL DATA 174
4. Random variable; a numerical value from every experiment, continuous or discrete,
5. Distribution function; a graphical or mathematical relationship used to represent the values of the
random variable,
6. Parameter; a numerical attribute of the entire population (for example the mean of the random
variable),
7. Event; the outcome of a random experiment,
8. Statistic; a numerical attribute of the sample (for example the average of the sample),
9. Probability; the chance of occurrence of an event in the experiment,
10. Confidence interval: an estimate of a population parameter,
11. Confidence level: how likely the confidence interval is to contain the population parameter,
12. Confidence coefficient: same as confidence level.
23.2.2 Measure of central tendency
The most used parameter is the mean:

x̄ = (x_1 + x_2 + ... + x_n) / n = (1/n) Σ_{i=1}^{n} x_i

where x_i is the value of the sample data and n is the number of measurements. In a population with a
finite number of elements, N, the mean is often denoted by the symbol μ.
Two other properties are the median and the mode.
1. Median; if the measured values are arranged in ascending or descending order, the median is the
value in the center of the set.
2. Mode; the most frequently occurring value.
23.2.3 Measures of dispersion
Dispersion is the spread or variability of the data. The deviation of each measurement is defined as

d_i = x_i − x̄

The mean deviation is defined as

d̄ = (1/n) Σ_{i=1}^{n} |d_i|

For a population with a finite number of elements N, the population standard deviation σ (sigma) is defined as:

σ = √( Σ_{i=1}^{N} (x_i − μ)² / N )

The sample standard deviation is defined as:

s = √( Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) )

The sample standard deviation is used when the data of a sample are used to estimate the population
standard deviation. If the number of measurements is more than 30, the sample standard deviation
is a good approximation of the population standard deviation:

n > 30 ⟹ σ ≈ s
Figure 23.2: The histogram of the temperature values.
The variance is defined as

variance = σ² for the population, s² for a sample
23.3 Histogram
Example 18 Measure the temperature of this room every 5 minutes. The temperatures for one hour can
then be:

1 = 22.3   2 = 21.9   3 = 22.8   4 = 21.9   5 = 22.3
6 = 21.9   7 = 22.8   8 = 22.3   9 = 22.3   10 = 21.5
11 = 22.3  12 = 21.9

What will be the temperature of this room? First of all, the randomness seems to be limited by the precision
of the A/D converter, since the steps are about 0.4 °C. One way to visualize the data is to use a histogram:
a histogram divides the results into bins and shows the number of values in each bin. A histogram of the
values, created by Matlab, is shown in Figure 23.2.
The vertical axis indicates the number of values for each value step of the horizontal axis. The bin
indicates the value step of the horizontal axis, and the bin width (size) for a given number of bins is calculated
as:

h = (x_max − x_min) / n

where n is the number of bins and h is the width of each bin. Each bin will then contain the number
of observations (sensor device readings), as shown in Figure 23.2. A histogram can be used for both
continuous random variables and discrete random variables, using a line for continuous random variables
and boxes for discrete random variables. Figure 23.2 shows discrete random variables and should give a
good idea about the mean of the temperature and the variance.
Some guidelines for the histogram:
1. Select between 5 and 15 bins,
2. Use the same width for each bin,
3. Cover the entire range of the data,
4. The bins should not overlap.
A histogram can be used to make an assumption about the mean of the values, the shape of the
distribution, and the range of the values. It can be a useful tool for getting a fast overview of the
data distribution.
23.3.1 Examples using the room temperatures
The values for the temperature in the room will then be (using Matlab):

mean                x̄ = 22.2 °C
median              x_m = 22.3 °C
standard deviation  s = 0.38 °C
variance            s² = 0.15 °C²
mode                m = 22.3 °C
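The same statistics can be computed with Python's standard `statistics` module instead of Matlab, using the twelve readings from Example 18:

```python
import statistics

# The twelve room-temperature readings from Example 18 (deg C).
temps = [22.3, 21.9, 22.8, 21.9, 22.3, 21.9, 22.8, 22.3, 22.3, 21.5, 22.3, 21.9]

print("mean    ", round(statistics.mean(temps), 1))     # 22.2
print("median  ", statistics.median(temps))             # 22.3
print("std dev ", round(statistics.stdev(temps), 2))    # 0.38 (sample std dev)
print("variance", round(statistics.variance(temps), 2)) # 0.15
print("mode    ", statistics.mode(temps))               # 22.3
```

Note that `statistics.stdev` computes the sample standard deviation (division by n − 1), matching the values listed above.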
23.4 Probability
Probability is a numerical value expressing the likelihood of the occurrence of an event relative to all
possibilities in a sample space. For example, when tossing a die, the probability of each number is 1/6.
The probability of occurrence of an event A is defined as the number of successful occurrences (m)
divided by the total number of possible outcomes (n) in a sample space, evaluated as n → ∞:

probability of event A:  P(A) = m / n
Some properties of probability:
1. Always a number between 0 and 1: 0 ≤ P(x_i) ≤ 1,
2. If an event is certain to occur: P(A) = 1,
3. If an event will never occur: P(A) = 0,
4. If an event Ā is the complement of event A, meaning that if event A occurs, Ā cannot occur:
P(Ā) = 1 − P(A),
5. If the events A and B are mutually exclusive, meaning that the probability of simultaneous
occurrence of A and B is zero, the probability of event A or B is: P(A or B) = P(A) + P(B),
6. If the events A and B are independent of each other, meaning that their occurrences do not depend
on each other, the probability of both events is: P(AB) = P(A) · P(B),
7. The probability of occurrence of event A or B or both, represented by P(A ∪ B) (A union B), is:
P(A ∪ B) = P(A) + P(B) − P(AB).
Example: In a measurement system there is a 2% chance that a sensor is defective and a 0.5% chance
that the DAQ system is defective. The probability of both a defective sensor and a defective DAQ system
is: P(AB) = P(A) · P(B) = 0.02 · 0.005 = 0.0001, giving 0.01%. The probability of having at least one
of them defective is: P(A ∪ B) = P(A) + P(B) − P(AB) = 0.02 + 0.005 − 0.0001 = 0.0249, giving 2.49%.
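The defective-sensor example can be checked with two one-line functions (hypothetical names, for independent events only):

```python
def p_both(p_a, p_b):
    """Probability that two independent events both occur: P(AB) = P(A)*P(B)."""
    return p_a * p_b

def p_union(p_a, p_b):
    """Probability that at least one of two independent events occurs:
    P(A u B) = P(A) + P(B) - P(AB)."""
    return p_a + p_b - p_both(p_a, p_b)

# The defective sensor (2%) and defective DAQ (0.5%) example from the text:
print(round(p_both(0.02, 0.005), 6))   # 0.0001 -> 0.01 %
print(round(p_union(0.02, 0.005), 4))  # 0.0249 -> 2.49 %
```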
23.4.1 Probability Distribution Functions
An important function of statistics is to use information from a sample to predict the behavior of a
population. This approach is called use of an empirical distribution.
Experience has shown that the distribution of a random variable often follows certain mathematical
functions. Sample data can then be used to compute parameters in these mathematical functions and
the mathematical functions can be used to predict properties of the population.
These functions are divided into two main groups:
1. probability mass functions; used for discrete random variables,
2. probability density functions; used for continuous random variables.
Probability mass function
1. The sum of all probabilities: Σ_{i=1}^{n} P(x_i) = 1,
2. The mean of the population: μ = Σ_{i=1}^{n} x_i P(x_i) (also called the expected value of x),
3. The variance of the population: σ² = Σ_{i=1}^{n} (x_i − μ)² P(x_i).
Probability density function
1. The probability in a finite interval: P(a ≤ x ≤ b) = ∫_a^b f(x) dx,
2. The mean of the population: μ = ∫_{−∞}^{∞} x f(x) dx,
3. The variance of the population: σ² = ∫_{−∞}^{∞} (x − μ)² f(x) dx.
23.4.2 Some probability distribution functions with engineering applications
The most common probability distribution functions are described briefly.
Binomial distribution
The binomial distribution describes discrete random variables that can have only two possible outcomes
(success and failure). The following conditions must be satisfied:
1. Each trial in the experiment can have only the two possible outcomes of success or failure,
2. The probability of success (p) remains constant throughout the experiment,
3. The experiment consists of n independent trials.
The properties are:
1. The probability: P(x) = C(n, x) p^x (1 − p)^(n−x), where n is the number of independent trials, p is
the probability of success (constant throughout the experiment), and x is the number of successes,
at most n. The factor C(n, x) = n! / (x!(n − x)!) is called n combination x.
2. The expected number of successes: μ = np,
3. The standard deviation: σ = √(np(1 − p)).
Exercise 19 A manufacturer of sensor devices claims that only 10% of the sensor devices must be repaired within the warranty period. What is the probability that exactly 5 sensors in a batch of 20 sensors need repair during the warranty period?
Solution: Let success be "no repair needed", so p = 100% - 10% = 90% = 0.9 and x = 15 successes out of n = 20 sensors. Then C(20, 15) = 20! / (15! (20 - 15)!) = 15504 and P(15) = C(20, 15) · 0.9^15 · (1 - 0.9)^5 = 0.032, meaning that the probability is 3.2% that exactly 5 sensors out of 20 require repair.
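The binomial calculation in Exercise 19 is easy to check numerically. The following is a minimal sketch (not part of the original text) using only the Python standard library:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(x) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Exercise 19: success = "no repair" with p = 0.9; 15 of 20 sensors OK
# means exactly 5 need repair.
p15 = binomial_pmf(15, 20, 0.9)
print(round(p15, 3))  # 0.032
```

The same function can be used to tabulate the whole distribution, since the probabilities for x = 0, ..., n must sum to 1.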
Poisson distribution
Used to estimate the number of random occurrences of an event in a specified interval of time or space if the average number of occurrences is already known. The two assumptions underlying the Poisson distribution are:
1. the probability of occurrence of an event is the same for any two intervals of the same length,
2. the probability of occurrence of an event is independent of the occurrence of other events.
The properties are:
Figure 23.3: The normal distribution with different means and variances.
1. The probability: P(x) = e^(-λ) λ^x / x!, where λ is the mean number of occurrences during the interval of interest,
2. The mean (expected value): E(x) = μ = λ,
3. The standard deviation: σ = √λ.
Exercise 20 On average there will be 3 wrong readings from a sensing device every minute. Find the probability of no wrong readings during the next minute.
Solution: λ = 3 and x = 0, giving P(0) = e^(-3) · 3^0 / 0! = e^(-3) ≈ 0.05.
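Exercise 20 can be verified the same way; the sketch below (not part of the original text) implements the Poisson probability mass function directly from the formula above:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(x) = e^(-lam) * lam^x / x!."""
    return exp(-lam) * lam ** x / factorial(x)

# Exercise 20: lam = 3 wrong readings per minute, probability of x = 0.
print(round(poisson_pmf(0, 3), 2))  # 0.05
```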
Normal distribution (Gaussian)
A simple distribution function that is useful for a large number of common problems involving continuous random variables. The normal probability density function is:

f(x) = (1 / (σ √(2π))) e^(-(x - μ)² / (2σ²))

where σ is the standard deviation of the population and μ is the mean of the population. Often the normal distribution is denoted by:

x ~ N(μ, σ²)

where N(·) is the normal distribution, μ is the mean, and σ² is the variance. The normal distribution is shown in Figure 23.3 with different means (μ) and variances (σ²).
Exercise 21 Use the temperature example values for mean and variance to make a normal distribution model of the temperature sensor.
The normal distribution is used extensively in engineering, for sensor devices, and for sensor measurements. The confidence intervals for the normal distribution are:
Confidence    Confidence
interval      level (%)
±1σ           68.3
±2σ           95.4
±3σ           99.7
±3.5σ         99.96
A confidence interval is an estimate of a population parameter. Instead of estimating the parameter by a single value, an interval likely to include the parameter is given. Thus, confidence intervals are used to indicate the reliability of an estimate. How likely the interval is to contain the parameter is determined by the confidence level or confidence coefficient. Increasing the desired confidence level will widen the confidence interval (www.wikipedia.org 2006).
23.4.3 Parameter estimation
In many experiments the sample size is small relative to the population; how can the mean and standard deviation of the whole population then be estimated?
Population mean
The mean: μ = x̄ ± ε, or x̄ - ε ≤ μ ≤ x̄ + ε, where ε is an uncertainty and x̄ is the sample mean. The interval from x̄ - ε to x̄ + ε is called the confidence interval for the mean. The confidence interval depends on the confidence level, the probability that the population mean will fall within the specified interval:

confidence level = P(x̄ - ε ≤ μ ≤ x̄ + ε)

The confidence level is normally expressed in terms of a variable α called the level of significance:

confidence level = 1 - α

where α is the probability that the mean will fall outside the confidence interval.
The central limit theorem makes it possible to estimate the confidence interval with a suitable confidence level. The central limit theorem states that if the sample size n is sufficiently large, the sample means x̄_i drawn from a population follow a normal distribution, and the standard deviation of these means is given by:

σ_x̄ = σ / √n

The standard deviation of the mean is also called the standard error of the mean. Important conclusions from the central limit theorem:
1. If the original population is normally distributed, the distribution of the x̄_i is normally distributed,
2. If the original population is not normally distributed and n is large (n > 30), the distribution of the x̄_i is normally distributed,
3. If the original population is not normally distributed and n < 30, the distribution of the x̄_i is only approximately normally distributed.
If the sample size is large, the central limit theorem can be used directly. Since x̄ is normally distributed, we can define (using statistics):

z = (x̄ - μ) / σ_x̄

and estimate the confidence interval on z. This is shown graphically in Figure 23.4. z = 0 means that x̄ has the value of the population mean μ.
The true value of μ will, however, lie somewhere in the confidence interval (-z_{α/2}, z_{α/2}), and the probability that z lies in the confidence interval is:

P(z) = 1 - α
Figure 23.4: Concept of the confidence interval of the mean (Wheeler & Ganji 2004).
This gives:

P(-z_{α/2} ≤ z ≤ z_{α/2}) = 1 - α

Substituting for z:

P(-z_{α/2} ≤ (x̄ - μ) / σ_x̄ ≤ z_{α/2}) = 1 - α

Substituting for σ_x̄:

P(-z_{α/2} ≤ (x̄ - μ) / (σ/√n) ≤ z_{α/2}) = 1 - α

Rearranged:

P(x̄ - z_{α/2} σ/√n ≤ μ ≤ x̄ + z_{α/2} σ/√n) = 1 - α

meaning that:

μ = x̄ ± z_{α/2} σ/√n

with confidence level 1 - α.
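The interval μ = x̄ ± z_{α/2} σ/√n can be computed in a few lines. The sketch below (not part of the original text) assumes a known population standard deviation and a 95% confidence level (z_{α/2} ≈ 1.96); the temperature readings and the value σ = 0.5 are hypothetical:

```python
from math import sqrt

def mean_confidence_interval(samples, sigma, z_half_alpha=1.96):
    """mu = xbar +/- z_{alpha/2} * sigma / sqrt(n), for known population
    sigma; z_half_alpha = 1.96 corresponds to a 95% confidence level."""
    n = len(samples)
    xbar = sum(samples) / n
    eps = z_half_alpha * sigma / sqrt(n)
    return xbar - eps, xbar + eps

# Hypothetical temperature readings with an assumed population sigma of 0.5:
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7, 20.0]
low, high = mean_confidence_interval(readings, sigma=0.5)
print(round(low, 2), round(high, 2))
```

For small samples with unknown σ, the Student t distribution would replace z; that case is not covered by the derivation above.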
23.4.4 Criterion for rejecting questionable data points
In some experiments one or more measured values appear to be out of line with the rest of the data. These are known as wild or outlier data points and are candidates for removal from the data set. One approach is to define data outside the 3σ confidence interval as outliers, but remember that it can be wrong to remove any of these data, as they may describe problems with the sensing devices or the measurement system.
23.4.5 Correlation of experimental data
Correlation coecient
Scatter due to random errors is a common characteristic of virtually all measurements. In some cases the scatter may be so large that it is difficult to detect a trend. See Figure 23.5, where subfigure (a) shows a strong relationship between x and y, subfigure (b) seems to show no relationship between x and y, and in subfigure (c) we cannot be certain.
Figure 23.5: Data showing significant scatter (Wheeler & Ganji 2004).
Figure 23.6: The relationship and the correlation coefficient.
A statistical parameter called the correlation coefficient can be used for checking the trend of a data set. If we have two variables, x and y, and our experiment yields a set of n data pairs [(x_i, y_i), i = 1, ..., n], the linear correlation coefficient (the sample correlation) will be:

r_xy = Σ_{i=1}^{n} (x_i - x̄)(y_i - ȳ) / [Σ_{i=1}^{n} (x_i - x̄)² Σ_{i=1}^{n} (y_i - ȳ)²]^(1/2)

where

x̄ = (Σ_{i=1}^{n} x_i) / n,   ȳ = (Σ_{i=1}^{n} y_i) / n

Remember that these equations are only valid for the sample correlation, not for the correlation of the whole population. The sample correlation coefficient, r_xy, indicates the relationship between the measured variables x and y. The range of r_xy is [-1, 1], where -1 indicates a perfectly linear relationship with a negative slope, 0 indicates no relationship, and +1 indicates a perfectly linear relationship with a positive slope. See Figure 23.6.
The correlation coefficient r_{x,y} for a population between two random variables x and y with expected values μ_x and μ_y and standard deviations σ_x and σ_y is defined as:

r_{x,y} = cov(x, y) / (σ_x σ_y) = E[(x - μ_x)(y - μ_y)] / (σ_x σ_y)
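The sample correlation formula translates directly into code. The following is a minimal sketch (not part of the original text) using only the Python standard library:

```python
from math import sqrt

def sample_correlation(xs, ys):
    """r_xy = sum((x_i - xbar)(y_i - ybar)) /
    sqrt(sum((x_i - xbar)^2) * sum((y_i - ybar)^2))."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# A perfectly linear relationship with positive slope gives r_xy = +1:
print(sample_correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```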
Least-squares linear fit
It is a common requirement in experimentation to fit sample data to mathematical functions such as straight lines and exponentials. One of the most used functions is the straight line:

Y = ax + b

fitted to a set of n pairs of data (x_i, y_i). To estimate the constants a and b, the method of least squares (or linear regression) is used to fit the data. For each value of x_i there will be an error:

e_i = Y_i - y_i

and the square of the error is:

e_i² = (Y_i - y_i)² = (ax_i + b - y_i)²

The sum of the squared errors is:

E = Σ_{i=1}^{n} (ax_i + b - y_i)²
The solution for a and b is found by minimizing E, setting the derivatives to zero:

∂E/∂a = 0 = Σ_{i=1}^{n} 2(ax_i + b - y_i) x_i

∂E/∂b = 0 = Σ_{i=1}^{n} 2(ax_i + b - y_i)
These two equations can be solved simultaneously for a and b (Wheeler & Ganji 2004):

a = [n Σ x_i y_i - (Σ x_i)(Σ y_i)] / [n Σ x_i² - (Σ x_i)²]

b = [Σ x_i² Σ y_i - (Σ x_i)(Σ x_i y_i)] / [n Σ x_i² - (Σ x_i)²]

The resulting line, Y = ax + b, is the least-squares best fit for the data.
The fit of the data can be evaluated by the coefficient of determination, given by:

r² = 1 - Σ (ax_i + b - y_i)² / Σ (y_i - ȳ)²

For engineering data r² should be at least 0.8; a value of 0.8-0.9 is a good indication of a linear regression.
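The closed-form expressions for a, b, and r² above can be sketched as follows (not part of the original text; the four data pairs are hypothetical):

```python
def linear_fit(xs, ys):
    """Least-squares fit Y = a*x + b using the closed-form solution,
    plus the coefficient of determination r^2."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    denom = n * sxx - sx ** 2
    a = (n * sxy - sx * sy) / denom
    b = (sxx * sy - sx * sxy) / denom
    ybar = sy / n
    ss_res = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

a, b, r2 = linear_fit([0, 1, 2, 3], [1.1, 2.9, 5.2, 7.1])
print(round(a, 2), round(b, 2), round(r2, 3))
```

For these hypothetical data the fit is close to Y = 2x + 1 and r² is well above the 0.8 rule of thumb.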
Outliers in x-y data sets
Having a set of data pairs (x_i, y_i), it is possible to check for outliers in the data set. First make a least-squares best fit of the data and plot the line together with the data. A visual check may reveal data with a much larger deviation from the line than the other data, and these may be treated as outliers. Another solution is to calculate the residuals for the data set:

e_i = Y_i - y_i

and plot these residuals (e_i). Assuming a normal distribution of the residuals, we can expect 95% of the residuals to be within the range ±2σ.
This can, however, give complications:
1. If the number of samples in the data set is small, the range of the confidence interval becomes invalid,
2. If the data are not linear, the residuals will have higher values.
Multiple and polynomial regression
Regression analysis is more general than the least-squares best fit. In multiple regression, the function will be:

Y = a_0 + a_1 x̂_1 + a_2 x̂_2 + ... + a_k x̂_k

with several independent variables x̂_1, ..., x̂_k. The x̂s can be independent variables or functions of the independent variables, such as:

x̂_1 = x_1,   x̂_2 = x_2,   x̂_3 = x_1 x_2
The way of solving is the same as for simple linear regression, using the error:

e_i = Y_i - y_i = a_0 + a_1 x̂_{1i} + a_2 x̂_{2i} + ... + a_k x̂_{ki} - y_i

The sum of the squared errors is then:

E = Σ (a_0 + a_1 x̂_{1i} + a_2 x̂_{2i} + ... + a_k x̂_{ki} - y_i)²

E is then minimized by partially differentiating with respect to each a and setting all the resulting equations to zero.
Many physical relationships cannot be represented by a simple straight line, but can easily be fitted with a polynomial. The form of a polynomial regression equation is:

Y = a_0 + a_1 x + a_2 x² + ... + a_k x^k

where k is the degree of the polynomial. This can be solved by statistical programs, given the data and the desired order of the polynomial. It can also be solved as a special case of multiple regression by letting the x̂s be x, x², x³, etc.
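The reduction of polynomial regression to multiple regression can be sketched with plain Python. This is an illustrative implementation (not part of the original text): it builds the normal equations Σ_j a_j Σ x^(i+j) = Σ y x^i obtained by differentiating E, and solves them with a small Gaussian elimination; the data set below is generated from y = 1 + 2x + 3x²:

```python
def polyfit(xs, ys, k):
    """Fit Y = a0 + a1*x + ... + ak*x^k by least squares, treating the
    powers x, x^2, ..., x^k as the x-hats of multiple regression.
    Solves the normal equations with Gaussian elimination (stdlib only)."""
    m = k + 1
    # Normal equations: sum_j a_j * sum(x^(i+j)) = sum(y * x^i)
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    a = [0.0] * m
    for i in range(m - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, m))) / A[i][i]
    return a  # [a0, a1, ..., ak]

# Data generated from y = 1 + 2x + 3x^2 is recovered (up to rounding):
coeffs = polyfit([0, 1, 2, 3, 4], [1, 6, 17, 34, 57], 2)
print([round(c, 6) for c in coeffs])
```

A statistical package would normally be used instead; the point of the sketch is only that the polynomial case needs nothing beyond the multiple-regression machinery described above.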
23.5 Uncertainty budget
An uncertainty analysis must be performed when dealing with sensors. The analysis takes into consideration the uncertainty of every device in your system, mainly the sensor devices, the A/D converter, and any actuators. The budget is a table showing the uncertainty of the values used in your system. An example of such a table:

Sensor          Range   Accuracy   Note
A/D converter   ...     ...
Sensor type A   ...     ...
Sensor type B   ...     ...        ...
Sensor type C   ...     ...
...             ...     ...
Device type A   ...     ...        ...
Device type B   ...     ...

The table must contain all devices with the necessary parameters and/or properties that are important for the accuracy of your measurement system. Do NOT include devices that will not influence the accuracy.
Chapter 24
Calibration
24.1 Introduction
Calibration is a measurement comparison between two sensors or measurement systems. One sensor or measurement system is to be validated or controlled; the other sensor or measurement system is the truth. The truth has to be a sensor or measurement system calibrated according to national or international standards with traceability. A sensor device requires periodic calibration in order to maintain its accuracy and traceability to recognised industry standards.
In the past 10 to 15 years a lot of low-maintenance sensor devices have been introduced to the market. Field engineers or instrument engineers have the perception that these sensor devices can work for a long time between calibrations (up to ten years), but this is not necessarily true. The performance often depends on the cleanliness and adjustment of pipes and electrodes, not to mention corrosion within the pipe or electrodes.
The most used calibration interval is annual calibration, but semiannual or even quarterly calibration is used. The problem is that the sensor devices normally must be removed from the plant and sent to a calibration laboratory. The best solution is therefore to have a rig so the sensor devices can be calibrated on site. An example of a plant with an integrated calibration system is shown in Figure 24.1.
As a rule of thumb, the truth sensor or measurement system should have an accuracy 3-5 times better than the sensor or measurement system to be validated.
The reason for calibration is to document the error of measurement and the uncertainties in the measurements. The calibration should result in some sort of document, like a certificate for the validated sensor or measurement system.
The calibration certificate documents the accuracy of the sensor or measurement system. The main reasons for calibration are:
1. optimize the process,
2. increase capacity,
3. higher quality,
4. safety,
5. quality of measurements.
The calibration of the instruments should be controlled by a system. The system can be manual or fully automatic, or often something in between. Figure 24.2 shows the relationship between the savings of time and cost, quality, and the type of calibration system. The main reasons for using a calibration system are:
1. improve efficiency; cut production down-time, simplify and automate calibration work,
2. save costs; optimize the calibration frequency and cut production down-time,
3. improve quality; automation of calibration data.
Figure 24.1: An example of how the sensor devices can be calibrated on site. The calibration system requires a calibration management level and a calibration and documentation level, shown as the levels between the instruments and the plant (www.beamex.com: JAN-10).
Figure 24.2: The relationship of savings and quality regarding the type of calibration system (www.beamex.com: JAN-10).
24.2 Calibration process
When to calibrate a sensor or measurement system:
1. Fiscal measurement (trade measurement),
2. Requirements (e.g. fuel pumps and weights),
3. Regulations,
4. Certified companies,
5. Final control/test.
Calibration is not necessary for:
1. Check measurements in production,
2. Production without specifications.
Accuracy:
1. is a measure of how well the sensor or the measurement system is able to measure the true value,
2. is documented by the following measurement errors during calibration:
(a) nonlinearity; an error specifying the maximum deviation of the real transfer function from the approximating straight line (Fraden 2004),
(b) hysteresis; an error specifying the maximum deviation of the output when the sensor or the measurement system is approaching from opposite directions (Fraden 2004),
(c) repeatability; an error specifying the inability of a sensor or measurement system to represent the same value under identical conditions (Fraden 2004),
3. is stated as:
(a) percent of the range, most often full scale range (FS),
(b) percent of current reading (O.R. or RDG).
24.3 Calibration of sensors
Sensors can be calibrated by first performing a calibration operation and then a validation operation. In the calibration operation the sensor is tested with different environment parameters within the range of the sensor, and a set of calibration data is stored for the sensor. This is shown in Figure 24.3.
The calibration certificate will contain part of the log data. The log data can now be used for making a calibration set for the sensor, stored either in the sensor or as a calibration data file for an external system. The sensor with the calibration data must be validated as shown in Figure 24.4.
The sensor with the calibration data will also have some sort of calibration certificate. If the calibration data are downloaded into the sensor, this will be the final calibration certificate. If the calibration data are kept in an external file/system, this will often be an additional calibration certificate.
Figure 24.5 shows a non-linear sensor signal calibrated to a linear output signal. Often the nonlinearity depends on the specific sensor element, so every sensor device will have different calibration data. The calibration data can be used in different ways:
1. Stored and used in the sensor device. The output from the sensor device is a calibrated signal. The sensor device needs storage and calculation capabilities,
2. Stored in the sensor device. The output from the sensor device is a non-linear signal, and the measurement system must calibrate the signals using data from the sensor device,
3. Stored in the measurement system. The output from the sensor device is a non-linear signal, and the measurement system must calibrate the signals using data stored in the measurement system.
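As an illustration of the third option (calibration data stored in the measurement system), the sketch below converts a raw, non-linear reading by piecewise-linear interpolation between stored calibration points. It is not part of the original text; the table values (raw mV to bar) are hypothetical, and a real system would use the points from the calibration certificate:

```python
from bisect import bisect_right

def calibrate(raw, cal_points):
    """Convert a raw, non-linear sensor reading to a calibrated value by
    piecewise-linear interpolation between stored calibration points.
    cal_points: list of (raw, true) pairs sorted by raw value."""
    raws = [r for r, _ in cal_points]
    if not raws[0] <= raw <= raws[-1]:
        raise ValueError("reading outside the calibrated range")
    i = min(bisect_right(raws, raw), len(raws) - 1)
    (r0, t0), (r1, t1) = cal_points[i - 1], cal_points[i]
    return t0 + (t1 - t0) * (raw - r0) / (r1 - r0)

# Hypothetical calibration data: raw output in mV -> pressure in bar.
table = [(0.0, 0.0), (2.1, 1.0), (3.9, 2.0), (5.4, 3.0)]
print(round(calibrate(3.0, table), 3))
```

Readings outside the calibrated range are rejected rather than extrapolated, since the certificate says nothing about the sensor behaviour there.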
Figure 24.3: Calibration of a sensor.
Figure 24.4: The validation of a calibrated sensor.
Figure 24.5: The calibration of a non-linear sensor signal to a linear output signal.
Figure 24.6: Part of a calibration certificate for a Schaevitz pressure sensor. The list shows the sensor output in mV at different temperatures and pressures within the pressure range of the sensor. The serial number and type of sensor are shown at the top of the certificate (a copy from KROHNE Skarpenord in Norway).
24.4 Calibration Certificate
Figure 24.6 shows the calibration certificate for a specific pressure sensor; the type and serial number are stated in the certificate. The output is in mV and is measured at different pressures and temperatures within the range of the sensor.
Part V
Documentation
Chapter 25
Guidelines for planning experiments
This chapter contains guidelines for designing and documenting experimental tasks.
25.1 Overview of an experimental task
In order to get the best results from experimental tasks, a systematic approach should be taken. The steps of an experimental task are:
1. Problem definition,
2. Design of the experiment,
3. Construction and development for the experiment,
4. Data gathering,
5. Data analysis,
6. Interpreting the results,
7. Conclusion(s) and reporting.
25.1.1 Problem definition
Very often engineers spend insufficient time analyzing and defining the problem, starting the design process without being aware of what the problem actually is and without an overview of the possible options.
25.1.2 Experimental design
This section is a major portion of any experimental program and will include some or all of the following parts:
1. Determining the schedule and costs. The engineer should always start any experiment by making a time schedule, for example a Gantt chart. This forces the engineer to think of all the sub-tasks in the experiment and to estimate the time usage of each sub-task,
2. Searching for information, often a literature survey,
3. Determining the experimental approach,
4. Determining the analytical model(s) used to analyze the data,
5. Specifying the measured variables,
6. Selecting the experimental units like instruments etc.,
7. Estimating the experimental uncertainties,
8. Determining the test matrix, the values of the independent variables to be tested,
9. Performing a mechanical design of the test rig. Remember to take into account that devices for the test rig have different delivery times,
10. Specifying the test procedure.
25.1.3 Experimental construction and development
This is often the most expensive portion of the experimental task, both in time and cost. Building the test rig will take some time, and there will always be some sort of problem with ordering the parts, delivery times, and/or the verification of the test rig. Please take this into consideration when making the time schedule.
25.1.4 Data gathering
This part covers gathering the data from the experiments after the test rig has been verified and debugged.
25.1.5 Data analysis
Very often computer programs will be used for the data analysis, either ready-made programs or programs developed for this purpose. Remember to add time for designing, testing, and debugging these programs as well.
25.1.6 Interpreting the results
After the data analysis the results must be interpreted. Logical reasons should be developed to explain the trends in the data, and remember to comment on all types of anomalous data. Comparison and validation between different types of experiments are important.
25.1.7 Conclusion and reporting
The conclusions will be the results of the data interpretation. These results should normally be documented in a report.
25.2 Activities in experimental projects
Experimental projects involve experimental tests that take time and cost money, and the results will very often be of interest to others.
25.2.1 Scheduling
Scheduling is an important task to get a detailed overview of all the sub-tasks in order to estimate the delivery date and important dates during the project. It is impossible to verify the schedule in advance, but making it gives the engineer practice in structuring a project and estimating the time of the sub-tasks. Planning is part of scheduling and is important because:
1. Other projects may depend on this project,
2. It gives better utilization of the resources,
3. All tasks must be completed before a deadline,
4. It makes it possible to adjust the activities as early as possible,
5. It determines the cost of the project.
25.2.2 Cost Estimation
Cost estimation covers the cost of time or usage of all the resources available in the project, any construction of a test rig, and the devices needed for collecting the data.
25.2.3 Dimensional analysis
This analysis is used to find the number of dimensional variables and dimensionless variables in an experiment. The Buckingham π theorem states: If there are n dimensional variables describing a physical phenomenon [q_1, q_2, ..., q_n], then there exists a functional relationship between these variables (Wheeler & Ganji 2004):

f(q_1, q_2, ..., q_n) = 0

This means that if the list is complete (and relevant), there exists a solution in nature to the problem. Then there exists a set of (n - m) dimensionless variables [π_1, π_2, ..., π_{n-m}] describing the same physical problem, where m is the number of basic dimensions involved (Wheeler & Ganji 2004). These dimensionless parameters are related by another functional relationship (Wheeler & Ganji 2004):

F(π_1, π_2, ..., π_{n-m}) = 0

The results of this theorem are (Wheeler & Ganji 2004):
1. A physical problem can be described using a suitable set of dimensionless parameters,
2. The set of dimensionless parameters has fewer members than the set of dimensional variables.
Several techniques can be used to determine a set of dimensionless parameters from the dimensional variables. The resulting set of dimensionless parameters describing a physical problem is not unique; there are usually alternative sets, but some are preferable for practical and historical reasons.
The goal is to reduce the number of dimensional variables and thereby simplify the experiments.
25.2.4 Determining the Test Rig Scale
The test rig scale will depend on the physical phenomena you are going to study and the type of measurements you are going to use. Use the dimensional variables to decide the size and effect of the rig.
25.2.5 Uncertainty Analysis
The uncertainty analysis is always important for any engineering experiment, as you will always deal with some type of measurement. The uncertainty analysis is important in both the design phase and the data analysis phase.
In the design phase the equipment must be selected with sufficient accuracy so the data can be analyzed in the analysis phase. The result should be a table with an uncertainty budget.
The analysis phase must contain some information about the systematic and random errors, and estimates of various loading and installation errors.
25.2.6 Calibration/testing
Before starting the experiments the rig should be tested and calibrated. This is also called the shakedown tests. The rig should first be tested with known input and output parameters to check the correct function of the rig. Then the rig should often be calibrated so you know the relationship between a set of specific inputs and the corresponding outputs.
The testing can be done in two different ways:
1. Static mode: when the rig is empty or not operating, the output signals should be known,
2. Special cases: using a set of predefined input signals/states should give a known set of output signals.
It is very important to verify that the rig is working correctly before starting the experiments!
Figure 25.1: The results from the test matrix (Wheeler & Ganji 2004).
25.2.7 Test Matrix and Test Sequence
An experiment seeks to determine the relationship between one or more dependent variables (responses) and a set of independent variables, often called factors. A test matrix can be used to define the test conditions of the experiments for these factors. Define the range of each factor and the values that deserve attention for each factor.
For many factors the test matrix can be complex. One solution can be to make a test matrix for a set of the factors, varying a subset, and to plot the results from all these experiments in one diagram. One example of such a table is:

            Y values
X values    Y_1   Y_2   Y_3   Y_4   Y_5
X_1
X_2
X_3

This table contains 15 measurements; the number of measurements will depend on the selected values for the factors. The combination of these experiments is shown in Figure 25.1.
The test sequence is also important: which conditions of the experiments should be run in which order. In some experiments the order can be random; in other types of experiments the order must be defined before starting. This depends highly on the type of experiment, but it should be defined during the planning of the tests.
25.2.8 Documenting Experimental Activities
Documentation may be boring, but it is still very important. Experiments may be individual or group based. Individual experiments mean that you alone will plan the experiments, do the experiments, and write the documentation. Group-based experiments will also involve some more project activities.
25.2.9 Group projects
Larger experimental projects consist of several engineers working together. It is important that these engineers work together as a group, not as individuals. This means that they should have regular meetings to discuss the progress and the sub-tasks of the project. These meetings can be divided into formal and informal meetings.
Group projects should always have a project leader.
Chapter 26
Meetings
Informal meetings
Small meetings, often short meetings once a week. A good piece of advice is to include an agenda in an email prior to these meetings, often sent by the project leader.
Formal meetings
Formal meetings include a Notice of meeting and a Minutes of meeting and are often used in connection with milestones in the project. The milestones will be part of the planning and should be part of the schedule.
Notice of meeting should contain:
1. Name of the project; which project is the meeting for,
2. Place; the location, often a building and room number,
3. Date and time,
4. The name of the members of the meeting,
5. The agenda.
Minutes of meeting should contain:
1. Name of the project,
2. The name of the members of the meeting (indicate who was present and who was not present),
3. The date and time,
4. A table of the subjects discussed at the meeting. Very often this table contains three columns: the ID, the subject, and the responsible person. An example of such a table:

ID   Subject                  Responsible
1    Subject #1 discussed ..
2    Subject #2 discussed ..  Name and date
..   ..
N    Subject #N discussed ..  Name and date
Effective meetings
1. Determine whether the meeting is necessary or not,
2. Be precise, do not let the latecomers control the start of the meeting!
3. Be prepared,
4. Have an objective, what should be the result of the meeting,
5. Have an agenda,
6. Start with the most important items,
7. Be clear about the responsibilities,
8. Document the meeting.
Chapter 27
Guidelines for documenting
experiments
A report is a document that communicates the work that has been carried out during some sort of experiment, group work, literature study, evaluation, or any combination of these. It is important to figure out who the readers of the report will be, and the competence, experience, and/or skills of these readers, and to adjust the contents of the report to these readers.
27.1 Informal report
An informal report is mainly used for internal purposes and should normally be as brief as possible. The main focus is to document an experiment, a meeting, or a discussion. This report should normally include an introduction, body, conclusion, and recommendations. Always start the report with to whom the report is going, whom it is from, the date, and the subject. After the conclusion, include your contact information if needed, at least the company, email, and phone number. The "Recommendations" section can be used to list any other people who support the information in your report.
27.2 Formal report
A report shall communicate the work done and the results of the work. Remember that the grading or any decisions will normally be based only on the information in the report. Therefore focus on getting all the necessary information into the report for the readers. A formal report should contain:
1. Title page; title, author, and date,
2. Abstract (or summary); brief problem description, brief description of methods, and the main results,
3. Preface; what the report is about, any changes of tasks, and credits,
4. List of tables and figures (optional),
5. Table of contents,
6. Introduction; short background, previous work, and new work,
7. Problem description; process and equipment description, measurement setup, etc.,
8. Theory; model and/or method development,
9. Methods; method development,
10. Results; simulation results, model fitting, optimization results,
11. Discussion; the results, whether they were as expected or what the causes are, uncertainty, remaining work,
12. Conclusions; focus on the results and what you have learned,
13. Appendices; task description, details, listings, data sheets, etc.,
14. References; the information necessary for finding the references,
15. Index (optional).
27.3 References
Literature references can be given in several ways, and it is up to the writer to select the right style. The most used styles are the Harvard style and the Vancouver style. Do NOT mix these styles within the same document.
References to web pages should be avoided, but if you really need to use such a reference, you
should include the author (if the name of the author is not available, you may use the name of the
organization/institution) as well as the title of the web article, the URL and the date when you accessed
the web page.
27.3.1 Harvard style
The Harvard style is recommended for making references. In this style, the reference in the text body should be placed in parentheses after (or in some cases inside) the sentence. It should include the author's name and the year of publication (example: (Flor, 2006)).
The references should be listed in detail in a separate list at the end of the report. They should be listed alphabetically.
27.3.2 Vancouver style
You should use the Vancouver style if you decide not to use the Harvard (name-and-year) style. In this style, the references in the text are given as numbers in brackets (example: [3]), and the entries in the reference list are identified by the same bracketed numbers. When using this style, the references should be listed in their order of appearance in the body of the text.
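With LaTeX/BibTeX, numbered references listed in order of appearance (the Vancouver convention) are obtained with the standard unsrt bibliography style; the entry key below is illustrative only:

```latex
% No extra package is needed for plain numeric citations.
\bibliographystyle{unsrt}   % "unsorted": entries numbered in citation order

% Text body:
This has been shown experimentally \cite{smith2004}.   % rendered as, e.g., [3]

% End of the report:
\bibliography{references}
```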
27.4 Article or paper
An article or paper is used to document any research work and should contain the following sections:
1. Introduction; introduce the problem you will be discussing in your article, or give a short account of your experience with the problem,
2. Methods; discuss the methods that you want to use on the problem or challenge you outlined in the introduction. Break each point into a separate paragraph,
3. Results; discuss the results of any experiments or simulations used with the methods. Break each point into a separate paragraph,
4. Discussion; this should include a brief summary of the article.
A mnemonic rule is IMRAD (Introduction, Methods, Results, And Discussion).
Bibliography
Bentley, J. P. (2005), Principles of Measurement Systems, 4th. edn, Pearson Education Limited, Essex,
England.
Caro, R. H. (2004), Automation Network Selection, ISA - The Instrumentation, Systems, and Automation Society.
Cravotta, R. (2008), Sensor-rich designs, EDN Europe, pp. 24-28. www.edn-europa.com.
Fortuna, L., Graziani, S., Rizzo, A. & Xibilia, M. (2007), Soft Sensors for Monitoring and Control of
Industrial Processes, Springer, London, UK. ISBN 1-84628-479-1.
Fraden, J. (2004), Handbook of Modern Sensors, 3. edn, Springer, USA.
Furenes, B. (2009), Methods for batch-to-batch optimization of parallel production lines, PhD Trial
lecture.
Ifeachor, E. C. & Jervis, B. W. (2002), Digital Signal Processing, A Practical Approach, 2nd. edn, Pearson
Education Limited, Essex, England.
Intel (1988), 386 Microprocessor, High-performance 32-bit CHMOS microprocessor with integrated memory management.
Kester, W. (1999), Practical Design Techniques for Sensor Signal Conditioning, Analog Devices (Prentice
Hall), Norwood, MA, USA.
Kirrmann, H. (2007), OLE for process control (OPC), Technical report, ABB Research Centre, Baden, Switzerland. Industrial Automation, OPC, Data Access Specification.
Krogh, E. (2005), OPC Seminar, Prediktor AS, Fredrikstad, Norway. www.prediktor.no (feb-07).
Mackay, S., Wright, E., Park, J. & Reynders, D. (2004), Practical Industrial Data Networks; Design,
Installation and Troubleshooting, Elsevier (Newnes), Oxford, UK.
The MathWorks (1999), Data Acquisition Toolbox User's Guide, 5th edn.
Meier, P. C. & Zünd, R. E. (2000), Statistical Methods in Analytical Chemistry, 2nd edn, John Wiley and Sons, Inc., New York, NY 10158-0012, USA.
Olsen, O. A. (2005), Instrumenteringsteknikk (in Norwegian only), Tapir Akademisk Forlag, Trondheim,
Norway.
Olsson, G. & Piani, G. (1998), Computer Systems for Automation and Control, 2nd edn, Prentice Hall
International (UK) Ltd., London, UK.
Olsson, G. & Rosen, C. (2003), Industrial Automation - Application, Structures and Systems, Lund
University, Lund, Sweden.
Pearson, R. K. (2005), Mining Imperfect Data, Dealing with Contamination and Incomplete Records,
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA.
Pettersen, O. (1984), Sanntidsprogrammering for Prosess-Styring (in Norwegian), 4th edn, Tapir forlag,
Trondheim, Norway. ISBN 82-519-0263-0.
PROFIsafe (2009), PC-based, but safe!, Control Engineering Europe, November/December, pp. 28-30. www.controlengeurope.com.
Rausand, M. & Høyland, A. (2004), System Reliability Theory: Models, Statistical Methods, and Applications, 2nd edn, Wiley-Interscience, John Wiley and Sons, Inc., Hoboken, New Jersey, USA.
Ripps, D. L. (1989), An implementation guide to real-time programming, Prentice-Hall, Inc.
Schultz, T. W. (1999), C and the 8051, Building Efficient Applications, Prentice Hall PTR, New Jersey, USA.
Skavhaug, A. & Pettersen, S. (2007), Wireless technology - something for safety-related applications?. Wireless seminar at HydroStatoil, Trondheim, 13-DEC-2007.
Skeie, N.-O. (2008), Soft Sensors for Level Estimation, PhD thesis, The Norwegian University of Science
and Technology (NTNU).
Wheeler, A. J. & Ganji, A. R. (2004), Introduction to Engineering Experimentation, 2nd edn, Pearson, USA.
www.matrikon.com (2007), Matrikon. OPC information.
www.wikipedia.org (2006), Wikipedia. Wikipedia, the free encyclopedia.
www.wikipedia.org (2010), Wikipedia. Wikipedia, the free encyclopedia.
Index
4-20 mA, 143
accuracy, 139
ADC
integrating, 137
sigma-delta, 136
successive-approximation, 136
amplifier, 109
ATEX, 99
bandwidth, 110
big endian, 128
Bluetooth, 154
Bode diagram, 111
bytes, 127
calibration, 185
process, 187
sensor, 187
channel skew, 162
check
limit, 142
redundancy, 142
validation, 142
CMRR, 111
COM
Component Object Model, 23
common mode rejection ratio, 111
communication
current loop, 143
network, 145
serial, 145
wireless, 145
control loop, 153
critical region, 74
data acquisition system, 124
DDE
Dynamic Data Exchange, 23
deadband, 35
deadlock, 72
distributed system, 157
DMA, 142
event, 61, 68
filter
band-pass, 114
band-stop, 114
Bessel, 115
Butterworth, 114
Chebyshev, 114
finite impulse response (FIR), 118
high-pass, 114
infinite impulse response, 118
limit check, 142
low-pass, 114
moving average, 118
redundancy check, 142
validation check, 142
Firewire, 145
global positioning system, 152
GPIB, 145
GPS, 152
Graphical User Interface (GUI), 9
HAL
Hardware Abstraction Layer, 92
HART, 99, 156
HART protocol, 121, 145
histogram, 175
Human-Machine Interface (HMI), 9
input loading, 113
input signal
ADC, 134
analog, 134
differential (DI), 140
digital, 130
multiplexer, 131
single ended (SE), 140
instrumentation bus, 145
interprocess communication
IPC, 70
interrupt, 61, 82, 142
latency, 61
little endian, 128
LXI, 145
Man-Machine Interface (MMI), 9
mean time between failures, 7
MEMS, 107
model
client/server, 27
instrumentation amplier, 113
publisher/subscriber, 28
sensing device, 113
multitasking, 61
noise, 122
outlier, 163
output loading, 113
output signal
analog, 133
DAC, 133
digital, 131
PCMCIA, 145
polling, 82, 142
Posix.4
Programming, 87
precision, 139
preemption, 61
priority, 61, 82
priority inheritance, 83
priority inversion, 83
protocol, 121, 122, 129, 145
CSMA/CD, 70
Ethernet POWERLINK, 71
HART, 121, 145
Profinet IRT, 71
token ring, 70
wireless, 146, 154
PXI, 145
real-time
deadline, 60
definition, 60
event, 61
interrupt, 61
interrupt latency, 61
multitasking, 61
preemption, 61
priority, 61
resource, 60
scheduler, 61
simultaneousness, 60
redundancy, 7
reference voltage, 140
resolution, 139
resource, 72
RFID, 150
RS-485, 145
RTOS requirement, 90
Sample and Hold (S/H), 162
sampling
aliasing, 163
SAS
Safety and Automation System, 18
satellite navigation system, 152
SCADA, 9
scheduler, 61, 62
strategies, 65
semaphore, 67
sensor, 98
absolute, 100
active, 99
passive, 99
relative, 100
sensor lter
finite impulse response (FIR), 118
infinite impulse response (IIR), 118
sigma-delta, 136
signal
amplification, 109
attenuation, 114
combiner, 120
conversion, 120
differentiation, 120
filtering, 114
integration, 120
linearization, 120
signal converters, 120
frequency to current, 120
frequency to voltage, 120
voltage to current, 120
SIL
Safety Integrity Level, 19
Simultaneous Sample and Hold, 162
Smart Sensors, 154
SOA, 45
software
process, 77
task, 80
thread, 79
successive-approximation, 136
tag, 33
task state
off-line, 65
ready, 65
running, 65
waiting, 65
transducer, 98
transmitter, 99
uncertainty
analysis, 184
budget, 184
USB, 145
User Interface (UI), 9
watchdog, 61
Wireless
Bluetooth, 156
ZigBee, 155
WirelessHART, 154, 156
XML
Extensible Markup Language, 43
ZigBee, 154
devices, 155