

ACCURACY AND RELEVANT EFFECTS

Errors, like straws, upon the surface flow;
He who would search for pearls must dive below.
- John Dryden

Course 9050 - October 1996

Principles of Instrumentation and Control


Synopsis
Definitions of accuracy and point accuracy precede a detailed discussion of the terminology necessary for interpreting instrument accuracy specifications: zero and span, systematic error, turndown (rangeability), precision, reproducibility, repeatability and sensitivity.
The environment in which an instrument operates affects its accuracy. Consequently, terminology relating to functional specifications is introduced.
A system for comparing transmitter specifications of different manufacturers is outlined.
The calibration of transmitters requires an understanding of other error sources: linearity, hysteresis, repeatability, reproducibility and dead band.
Methods, including MTBF, are examined as means of calculating system errors.


Accuracy
1. The degree to which an indicated value matches the actual value of a measured variable.
2. Quantitatively, the difference between the measured value and the most probable value for the same quantity, when the latter is determined from all available data, critically adjusted for sources of error.
3. In process instrumentation, the degree of conformity of an indicated value to a recognised, accepted standard value or ideal value.

Accuracy, Measured
The maximum positive and negative deviation observed in testing a device under specified conditions
and by a specified procedure.
Strictly speaking, an accuracy of 99% is an inaccuracy of 1%. For convenience, however, this inaccuracy figure has always been referred to as accuracy.
Sometimes an accuracy statement is expressed in point accuracy terms.

Point Accuracy
The limits of error of an instrument may be expressed in a number of ways. In some cases the point accuracy is given. This is the accuracy of the instrument at one point on its scale only, and does not give any information on the general accuracy of the instrument.
Before we delve into specifications, let's take a look at some common terminology.

Terminology
RANGE: The region between the limits within which a quantity is measured, received, or transmitted.
UPPER RANGE LIMIT (URL): The highest quantity that a device can be adjusted to measure.
LOWER RANGE LIMIT (LRL): The lowest quantity that a device can be adjusted to measure.
UPPER RANGE VALUE (URV): The highest quantity that a device is adjusted to measure.
LOWER RANGE VALUE (LRV): The lowest quantity that a device is adjusted to measure.
SPAN: The algebraic difference between the upper and lower range values.
A typical instrument specification might therefore read:
Example: Model 1151DP4, calibrated 0 to 100 in H2O
Range = 0/25 to 0/150 inches H2O
URL = 150 inches H2O
LRL = -150 inches H2O
URV = 100 inches H2O
LRV = 0 inches H2O
Span = 100 inches H2O
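To make the relationships between these terms concrete, here is a minimal Python sketch of the 1151DP4-style example above (the class and its layout are purely illustrative, not a vendor API):

```python
# Minimal sketch of the range terminology, using the example values above.
from dataclasses import dataclass

@dataclass
class TransmitterRange:
    url: float   # Upper Range Limit - highest value the device CAN be adjusted to measure
    lrl: float   # Lower Range Limit - lowest value the device CAN be adjusted to measure
    urv: float   # Upper Range Value - highest value the device IS adjusted to measure
    lrv: float   # Lower Range Value - lowest value the device IS adjusted to measure

    @property
    def span(self) -> float:
        # Span is the algebraic difference between the upper and lower range values
        return self.urv - self.lrv

dp = TransmitterRange(url=150.0, lrl=-150.0, urv=100.0, lrv=0.0)  # inches H2O
print(dp.span)  # 100.0 inches H2O
```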


Reading the Specifications


Once you have all the pertinent specifications, you need to compare them on an equal basis among the various transmitters. Equal comparisons require that you read more than the numerical portion of the specification; you also need to read the wording that goes with the number. Is the specification expressed as ±0.2% of calibrated span, ±0.2% of URL, or ±0.2% of reading? Per 50°F, per 50°C, or per 100°F?
For example, suppose you are looking at a ±0.2% accurate transmitter that has an Upper Range Limit (URL) of 150 inH2O. You will be calibrating it to 0-100 inH2O and typically reading it at 80 inH2O. If the transmitter's accuracy specification is read as:
±0.2% of calibrated span, then the reading will be 80 ± 0.2 inH2O (0.2% of 100 inH2O = 0.2 inH2O);
±0.2% of URL, then the reading will be 80 ± 0.3 inH2O (0.2% of 150 inH2O = 0.3 inH2O);
±0.2% of reading, then the reading will be 80 ± 0.16 inH2O (0.2% of 80 inH2O = 0.16 inH2O).
Fig. 7.1 Error as Percent of Span or Reading
As you can see, there is quite a difference between these transmitters, which all have ±0.2% accuracy. The only time these errors are all equal is when the transmitter is calibrated and read at the upper range limit.
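The three readings above can be reproduced with a short calculation. The sketch below assumes the URL, span and typical reading from the example; it simply shows why the wording of the specification matters:

```python
# Error band implied by a +/-0.2% specification, depending on its reference quantity.
spec = 0.002          # 0.2% expressed as a fraction
url = 150.0           # upper range limit, inH2O
span = 100.0          # calibrated span, inH2O (0 to 100)
reading = 80.0        # typical reading, inH2O

error_of_span = spec * span        # 0.2 inH2O  -> reading is 80 +/- 0.2
error_of_url = spec * url          # 0.3 inH2O  -> reading is 80 +/- 0.3
error_of_reading = spec * reading  # 0.16 inH2O -> reading is 80 +/- 0.16

for label, err in [("% of span", error_of_span),
                   ("% of URL", error_of_url),
                   ("% of reading", error_of_reading)]:
    print(f"0.2 {label}: 80 +/- {err:.2f} inH2O")
```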
What are the different types of errors that may be seen?
When reading specifications, you may find them expressed as a variety of errors. These are all reflections of how the performance is affected. The three types of error commonly used in data sheets are: zero error, span error, and systematic error. A seldom-mentioned one is turndown error.

Zero Error is a shift of a constant magnitude between the measured variable and the ideal variable. It is normally measured at the zero reference point. With this error, the starting point of the measured curve is offset slightly and thus causes the entire curve to be offset by an equal amount.
Fig. 7.2 Zero Error


Span Error is the difference between the actual span and the ideal span. With this type of error, the actual span may be slightly larger or smaller than the ideal span. The variables within the curve would then be proportionately larger or smaller.
Zero and span error are sometimes expressed together as a total error. This implies a maximum amount of error that the measurement could have. It does not necessarily mean that the two errors will both be in the same direction, nor does it mean that they are divided equally.
Fig. 7.3 Span Error

Turndown error can occur when the transmitter is used at a span other than its maximum span. It can be expected to occur when the specification is expressed as a percentage of upper range limit (URL) or a percentage of maximum span. Turndown error is proportional in magnitude to the amount of turndown. It is a constant error along the whole range of the measurement; it is the same for all readings.
If an error, such as temperature effect, is expressed on a data sheet as 0.2% of URL and the URL of the transmitter is 150 inH2O, then at 150 inH2O the error would be 0.2% of URL (150), or 0.3 inH2O. This equates to 0.2% of the actual reading. If the same transmitter is read or calibrated at 50 inH2O, the error would still be 0.3 inH2O, but at that reading the error is 0.6% of the reading.
A systematic error is one that occurs during a number of measurements made under the same conditions and to the same magnitude. It is a predictable, repeatable error.

Accuracy & Calibration


All measurement is comparison. When a length is measured it is compared with a fixed length, or
standard of length and the number of times the unknown length is greater than the standard is
found. The standard is chosen so that the number of times or numeric is not too large or too
small.
To ensure that the length, or any other measured quantity, as measured by one person, shall agree
with that as measured by a second person, the standards must be absolutely fixed and reproducible
with precision. Once a standard of length has been established, all measuring instruments based on
this standard are made to agree with the standard. In this way, the length of an object as measured
by one instrument will agree with the length as measured by any other instrument.
Accuracy can be obtained only if measuring instruments are periodically compared with standards
which are known to be constant; i.e. from time to time the instruments are calibrated.
As a consequence, the following terms become important.
Precision. The precision of the readings is the agreement of the readings among themselves. If the same value of the measured variable is measured many times and all the results agree very closely, then the instrument is said to have a high degree of precision, or reproducibility. A high degree of reproducibility means that the instrument has no drift; i.e. the calibration of the instrument does not gradually shift over a period of time. Drift occurs in flowmeters because of wear of the differential-pressure-producing element, and may occur in a thermocouple or a resistance thermometer owing to changes in the metals brought about by contamination or other causes. As drift often occurs very slowly, it is likely to be unnoticed and can only be detected by a periodic check of the instrument calibration.


A high degree of precision is, however, no indication that the value of the measured variable has
been accurately determined. A manometer may give a reading of 10 pounds per square inch for a
certain pressure to within 0.01 pounds per square inch, but it may be several pounds per square inch
in error, and the true value of the pressure will be unknown until the instrument has been calibrated.
It is accurate calibration that makes accurate measurement possible.

Fig. 7.4
1. Bias error is not negligible, but precision is good (device 1).
2. Bias error is negligible, but precision is poor (device 2).
3. Bias error is small and precision is good (device 3); this is an accurate device.

Sensitivity. The sensitivity of an instrument is usually taken to be the size of the deflection produced
by the instrument for a given change in the measured variable. It is, however, quite frequently
used to denote the smallest change in the measured quantity to which the instrument responds.
The largest change in the measured variable to which the instrument does not respond is called the
dead zone.
Rangeability. The rangeability of a measuring instrument is usually taken to mean the ratio of the
maximum meter reading to the minimum meter reading for which the error is less than a stated
value. For example: In a positive displacement flow meter a certain quantity of liquid passes the
fixed and moving parts. The maximum quantity which can be measured by the meter is usually
fixed by the meter size. Increasing the flow above the meter maximum will shorten the life of the
meter owing to greatly increased wear and is therefore highly undesirable. It will be seen from the
graph below that the minimum flow for which the meter accuracy is plus or minus 0.5 per cent is 10
per cent of the maximum reading. The rangeability for an accuracy of plus or minus 0.5 per cent of
true value is therefore 10 : 1.

Fig. 7.5


Error Sources for Transmitters


Sources of errors are found in the process conditions. Temperature and static pressure are the two
most common error sources and the largest ones. They will be discussed below. Other error
sources which occasionally occur are vibration, power supply changes, and RFI. Any of these can
affect the overall performance of a transmitter.
A higher static pressure can create some performance errors. Often, the zero effect can be calibrated
out under a stable line pressure, leaving only the span error, which may or may not be systematic.
When static pressure varies, however, the zero error becomes more important and should be
considered in the overall performance.
Temperature error can result from both process and ambient temperature changes. Generally, process temperature is stable while ambient temperature fluctuates. It is not always simple to determine which one causes the temperature effect errors. It is a fairly simple matter, though, to re-zero a transmitter after it has reached operating temperature, assuming the process temperature is constant. This would eliminate the zero error and leave only the span error. The transmitter is still subject to ambient temperature changes.
An example of the effect of temperature and static pressure:
On the bench, a transmitter is calibrated 0 to 100 inH2O at 75°F and no static pressure. The accuracy is ±0.2% of span, giving an error of ±0.2 inH2O. The transmitter will be used at a line pressure of 1500 psi and in a location where the temperature may vary by 50°F.
The error contributed by a 50°F temperature change from calibration conditions is predicted by its temperature effect specification of ±1.0% of span per 100°F. This yields an additional error of ±0.5 inH2O (1% of 100 inH2O x 50/100). The error contributed by a line pressure of 1500 psi is predicted by its static pressure effect specification of ±0.25% of reading per 1000 psi. This yields another error of ±0.37 inH2O (0.25% of 100 inH2O x 1500/1000). If we were to add all these up, there is a worst case error of:
Error allowed within accuracy limits: ±0.2 inH2O
+ error contributed by 50°F temperature shift: ±0.5 inH2O
+ error contributed by 1500 psi static pressure: ±0.37 inH2O
= Worst case error: ±1.07 inH2O
This error is much worse than the stated accuracy of ±0.2 inH2O. A worst case error (total error) assumes that the errors will all be at their maximum amount in the same direction. This is unlikely to happen and would not represent typical performance of a transmitter.
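As a sketch of the arithmetic above (values taken directly from the example; the simple summation is the worst-case assumption, not a prediction of typical performance):

```python
# Worst-case (additive) error for the bench calibration example above.
accuracy_error = 0.002 * 100.0                 # +/-0.2% of the 100 inH2O span = 0.20 inH2O
temp_error = 0.01 * 100.0 * (50.0 / 100.0)     # 1.0% of span per 100 degF, for a 50 degF change = 0.50 inH2O
static_error = 0.0025 * 100.0 * (1500 / 1000)  # 0.25% of reading per 1000 psi, at 1500 psi = 0.375 inH2O

worst_case = accuracy_error + temp_error + static_error
print(f"Worst case error: +/-{worst_case:.2f} inH2O")  # about +/-1.07 inH2O
```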
Power Supply Effect
If a transmitter is operated at a different voltage in the field than it was calibrated with on the bench, then variations in output (for the same input) can occur.


Vibration Effect
The effect upon output is solely due to the vibratory environment to which the transmitter is subject.
Mounting Position Effect
The difference in output when a transmitter is mounted in a position different from that in which it was calibrated. It shows itself as a zero shift or systematic error which can be recalibrated away.
Load Effect
If the total loop load should alter, then the output of the transmitter (for the same process input) may be affected.
EMI/RFI Effect
An output change solely due to interference from an electromagnetic or radio frequency radiator.

Estimating Systems Accuracy


Whenever a number of devices - each with its individual error distribution - are combined in a system, a statistical procedure must be used to combine the errors. Each instrument will have a given accuracy rating, of ±0.5% of full scale for example. This means that the worst error for that instrument, observed over the full range of operation at the rated conditions, would be ±0.5%. The probability of this maximum error appearing at any point in time is in the order of 1 in 100.
If two dissimilar instruments (such as dp Cell transmitter and a recorder) are connected in series,
the probability of both exhibiting their maximum error at the same time is remote. If an error
distribution of both instruments were completely at random, the probability of their maximum
errors occurring at the same time is the product of their individual probabilities, which would be in
the order of 1 in 10,000.
The only way that the probability of a system error can be the same as that of the individual components is if the individual errors are combined on a root-mean-square basis. For a system consisting of the dp Cell transmitter and the recorder, each with ±0.5% accuracy, the combined error would be:
√(0.5² + 0.5²) = √(0.25 + 0.25) = 0.707%
The other important rule that must be considered when calculating system accuracy is to apply the
proper gain to each signal. The gain of a ratio station varies with the ratio setting; that of a multiplier
varies with the second input. The gain of a square-root converter varies with the level of input (or
output) since it is a nonlinear device. With these devices, the output error varies with gain, which
varies with the operating level.
The overriding consideration for the instrument buyer is to ensure that one compares apples with apples when manufacturers specify differently.


The following procedure can help:


1. List transmitter specifications

Specifications                   Transmitter A                      Transmitter B
Upper Range Limit (URL)          300 in H2O                         300 in H2O
Accuracy                         ±0.2% of span                      ±0.1% of URL
Temperature Effect
  Zero                           ±0.5% of span per 100°F            -
  Span                           ±0.5% of span per 100°F            -
  Total*                         -                                  ±1.0% of URL per 100°F
Static Pressure Effect
  Zero                           ±0.25% of URL per 2000 psi         ±0.25% of URL per 2000 psi
  Span                           ±0.25% of reading per 1000 psi     ±0.25% of span per 1000 psi
  Total*

Table 7.1

2. Define operating conditions

Calibrated Span                  0 to 100 in H2O
Expected Temperature Change      50°F
Expected Static Pressure         500 psig
Expected Reading                 75 in H2O

Table 7.2


3. Convert all of the errors into common terms:

Specifications                   Transmitter A                          Transmitter B
Accuracy                         0.2% x 100 = 0.2 in H2O                0.1% x 300 = 0.3 in H2O
Temperature Effect
  Zero                           0.5% x 100 x 50/100 = 0.25 in H2O      -
  Span                           0.5% x 100 x 50/100 = 0.25 in H2O      -
  Total*                         0.25 + 0.25 = 0.50 in H2O              1.0% x 300 x 50/100 = 1.5 in H2O
Static Pressure Effect
  Zero                           0.25% x 300 x 500/2000 = 0.19 in H2O   0.25% x 300 x 500/2000 = 0.19 in H2O
  Span                           0.25% x 75 x 500/1000 = 0.094 in H2O   0.25% x 100 x 500/1000 = 0.12 in H2O
  Total*

Table 7.3

4. Calculate Total Probable Error (TPE) = √(A² + B² + C² + ...)

TRANSMITTER A
√[(0.20)² + (0.5)² + (0.19)² + (0.094)²] = ±0.58 in H2O
TPE = ±0.58 in H2O, i.e. 0.58% of the 100 in H2O span

TRANSMITTER B
√[(0.3)² + (1.5)² + (0.19)² + (0.12)²] = ±1.55 in H2O
TPE = ±1.55 in H2O, i.e. 1.55% of the 100 in H2O span

Table 7.4

Manufacturers may use any or all of the above terms when citing transmitter specifications; thus it
is imperative the entire specification is read and understood when evaluating transmitters.
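As a minimal sketch of the comparison above (the error contributions are the values from Table 7.3, and the root-sum-square is the statistical combination described earlier):

```python
from math import sqrt

def total_probable_error(errors):
    # Root-sum-square combination of independent error contributions (in H2O)
    return sqrt(sum(e * e for e in errors))

# Error contributions in inches H2O, from Table 7.3:
# accuracy, temperature total, static pressure zero, static pressure span
transmitter_a = [0.20, 0.50, 0.19, 0.094]
transmitter_b = [0.30, 1.50, 0.19, 0.12]

print(f"Transmitter A TPE: +/-{total_probable_error(transmitter_a):.2f} in H2O")  # ~0.58
print(f"Transmitter B TPE: +/-{total_probable_error(transmitter_b):.2f} in H2O")  # ~1.55
```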


Factors Affecting Performance


Linearity: The closeness to which a curve approximates a straight line. Linearity can be expressed as independent, terminal-based, or zero-based linearity.
Independent: The maximum deviation of the actual characteristic from a straight line so positioned to minimise the maximum deviation.
Terminal-Based: The maximum deviation of the actual characteristic from a straight line coinciding with the actual characteristic at the upper and lower range values.
Zero-Based: The maximum deviation of the actual characteristic from a straight line so positioned as to coincide with the actual characteristic at the lower range value and to minimise the maximum deviation.

Fig. 7.6 Independent Linearity


Fig. 7.7 Terminal-based Linearity

Fig. 7.8 Zero-based Linearity


Hysteresis: The maximum difference for the same input between the up-scale and the down-scale output values during a full range traverse in each direction.
Fig. 7.9

Repeatability: The closeness of agreement among a number of consecutive measurements of the output for the same value of the input under the same operating conditions, approaching from the same direction, for full range traverses.
Fig. 7.10


Reproducibility: The closeness of agreement among repeated measurements of the output for the same value of input under the same operating conditions over a period of time, approaching from both directions. Normally this implies a long period of time. It includes hysteresis, drift and repeatability. Between repeated measurements the input may vary within normal operating conditions. Note that reproducibility is a time-based performance specification; this characteristic is also implied when drift or stability is called out. Reproducibility can be checked by a bench test (or laboratory-type calibration), placing the unit in service under normal operating conditions, and after a period of time bringing the unit out of service for another bench test.

Dead Band: The range through which an input can be varied without initiating response.

Load Limitations: The maximum load that can be present in the loop for the transmitter to operate over its full output range for a given power supply voltage.
Fig. 7.11
This is calculated by:
Max Load (RL) = (Power Supply Voltage - Lift-off Voltage) / Maximum Signal Current
A typical lift-off voltage would be 12 V, so for a power supply of 24 volts and a 20 mA signal the maximum loop resistance would be:
RL = (24 - 12) / 0.02 = 600 ohms
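The same calculation, as a short sketch:

```python
def max_loop_resistance(supply_voltage, lift_off_voltage=12.0, max_signal_amps=0.020):
    # Maximum loop load for the transmitter to drive its full output range
    return (supply_voltage - lift_off_voltage) / max_signal_amps

print(max_loop_resistance(24.0))  # 600.0 ohms for a 24 V supply and a 20 mA full-scale signal
```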

Specifications Supplied by Manufacturers


A. Functional Specifications
Functional specifications are used to describe the environment within which the instrument can
operate and still meet its performance specifications. The most common are discussed.
Service: Describes the processes that can be measured, e.g. liquid, gas, vapor.
Output: The type of signal representing the variable that is delivered by the instrument, e.g. 4-20 mA, 3-15 psi, digital.


Power Supply: Describes the power that is required to operate the instrument, e.g. 12-45 volts dc.
Load Limitation: The maximum loop resistance that can be present with a specific power supply.
Indication: Addresses the use of indicating meters.
Hazardous Locations: Describes the types of hazardous location in which the instrument is certified for use, e.g. Class 1, Div 1 and 2, Groups B, C.

Damping: A discussion involving time constants will be useful before defining damping. A time constant is the time required for an instrument's output to complete 63.2% of a total input step change. Time constants are designated by the Greek letter tau (τ), and the fraction of a step change completed after n time constants is given by 1 - e^-n, where:
e = natural logarithmic base (2.718)
n = number of time constants
The table summarises the percentage changes for durations up to 5 time constants.

Time constants (n)    Equation     % Output change
1                     1 - e^-1     63.2%
2                     1 - e^-2     86.5%
3                     1 - e^-3     95.0%
4                     1 - e^-4     98.2%
5                     1 - e^-5     99.3%

Since the exponential response never quite reaches 100%, it is common to use 4 or 5 time constants to approximate the total response of the instrument. This total response is commonly referred to as Response Time. For example, if an instrument has a 0.2 second time constant, total response time can be calculated by multiplying the time constant (0.2 sec) by the number of desired time constants (e.g. 4 or 5).
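A short sketch of the table and the response-time rule of thumb (the 0.2 s time constant is the example value from the text):

```python
from math import exp

tau = 0.2  # example time constant, seconds

for n in range(1, 6):
    fraction = 1 - exp(-n)                        # fraction of a step change completed after n time constants
    print(f"{n} time constants: {fraction:.1%}")  # 63.2%, 86.5%, 95.0%, 98.2%, 99.3%

response_time = 5 * tau  # common approximation of total response time
print(f"Approximate response time: {response_time} s")
```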
Damping is the ability to electronically adjust an instrument's time constant.
Turn-on Time: The maximum time required, during power-up, which allows instrument operation.
B. Performance Specifications
Temperature Effect
The temperature effect, unless otherwise stated, is assumed to include both the zero and span errors, i.e. the total effect. It can be expressed in a number of ways, e.g.:
±0.15% of span per 100°F
±1.0% of max. range per 100°F, between 50°F and 150°F
±0.01°F per °F, between -15°F and 185°F
Overpressure Effect
On a differential pressure transmitter, overpressure may be either on the high side or the low side.
It can be expressed in a number of ways, e.g.:


±0.25% of upper range limit for 2000 psi
±1.0% of calibrated span for 500 psi
Note: A reasonably high overpressure capability on differential pressure transmitters provides added assurance that improper sequencing of the three-valve manifold will not damage the unit or force it to be recalibrated.
Static Pressure Effect
Static pressure effect is the effect of line pressure applied to both the high and low sides of a transmitter. It involves both zero and span errors.
Zero errors can be easily corrected by simply re-zeroing the transmitter at its operating pressure. Span errors, however, are a different situation.
The determination of span errors requires expensive, sophisticated equipment. For this reason most manufacturers make no statement about this effect, making it difficult to make appropriate corrections. Ideally, when specified, span errors should be systematic and therefore easy to correct. In this case corrections can be made by calculating the effect and compensating for it during calibration, or by placing a simple algorithm in a computer-based receiver.
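Where the span error is systematic and published, the correction algorithm mentioned above can be very simple. The sketch below is illustrative only; the coefficient, sign convention and function name are assumptions, and a real transmitter's data sheet would supply the actual figures:

```python
def correct_for_static_pressure(reading, line_pressure_psi,
                                span_effect_per_1000psi=0.0025):
    # Assumed systematic span error: +0.25% of reading per 1000 psi of line pressure.
    # The correction simply removes that predictable shift from the raw reading.
    error_fraction = span_effect_per_1000psi * (line_pressure_psi / 1000.0)
    return reading / (1.0 + error_fraction)

print(correct_for_static_pressure(100.37, 1500))  # ~100.0 if the effect is exactly systematic
```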
Vibration Effect
Most specifications refer to sinusoidal vibration, which may or may not be the type exhibited in
service.
C. Mean Time Between Failure
It is becoming common for companies to quote Mean Time Between Failure (MTBF) for a product as an indication of the product's reliability. In particular, MTBF is often promoted as a sales feature of new products, in order to gain the confidence of the buyer.
This is a valid approach, provided that all parties are aware of what is meant by MTBF.
MTBF is a term that came out of the aerospace industry. Because the aerospace industry uses
systems of great complexity, in highly critical applications, a means had to be developed to measure
what the expected reliability (life cycle) of a product or system would be.
The method of calculating the MTBF of a product is based on the failure rate of each and every
component in the design of the product. (Failure rates are given for such things as transistors,
resistors, ICs, solder, and circuit boards).
These values have been accumulated over the past 40 years, tabulated, and continuously revised for
use by the industry in general. The values are then run through a rather involved formula and the
result of this calculation gives the expected failure of the product.
The MTBF calculation gives a statistically accurate indication of the product's reliability, but it also includes some basic assumptions. Some of the assumptions included in the calculation are:
1. All piece parts operate perfectly at time of assembly.
2. The product is properly designed.
3. There are no assembly errors.
4. The product is used in the environment for which it was designed.

5. The product is used continuously (i.e. not 8-5, Monday to Friday).

As you can see, this calculation is a tool for design and reliability engineers to pinpoint a weak area
of design, not an implicit guarantee of any type.
D. Sigma Conformance
Some manufacturers are now stating that their instruments are manufactured to a certain sigma (σ) conformance. This means that the manufacturer uses data drawn from a small sample to represent the complete population, using a normal distribution curve to determine the standard deviation from specification. This is a means of predicting instrument errors and is vital in setting specifications.
The area under the normal distribution curve is used to determine the probability of picking an instrument with a given error from the population of instruments manufactured. For example, a manufacturer specifies a product to have a maximum error of ±0.75%. If the manufacturer designs to a three sigma deviation, then 99.73% of the instruments they manufacture would meet or better the ±0.75% specification. In turn, this implies that 95.45% (two sigma deviations) of the manufactured instruments would have errors within ±0.50%. Finally, the area under the one sigma deviation represents the 68.27% of instruments with errors less than or equal to ±0.25%. Therefore, publishing a specification based on a three sigma standard deviation ensures that 99.73% of the instruments manufactured meet or better the published specification.
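The percentages quoted above come straight from the normal distribution; a short sketch using only the standard library reproduces them:

```python
from math import erf, sqrt

def fraction_within(k_sigma):
    # Fraction of a normally distributed population falling within +/- k standard deviations
    return erf(k_sigma / sqrt(2.0))

for k in (1, 2, 3):
    within = fraction_within(k)
    print(f"{k} sigma: {within:.2%} within spec, "
          f"{(1 - within) * 1e6:.0f} defective ppm")
# 1 sigma: 68.27%, 2 sigma: 95.45%, 3 sigma: 99.73% (about 2700 ppm outside spec)
```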

Fig. 7.12

Spec. Limit    % Within Spec    Defective ppm
1 Sigma        68.27            317300
2 Sigma        95.45            45500
3 Sigma        99.73            2700
4 Sigma        99.9937          63
5 Sigma        99.999943        0.57
6 Sigma        99.9999998       0.002


Case Study

Will Digital Communications Improve Loop Accuracy?


Choosing the Right Instrument
Generally, smart instruments are best suited to measurements requiring communication capabilities, such as hazardous or inaccessible locations; to batch processes that require frequent rearranging; to critical loops that can't afford much downtime; and to applications in which diagnostics speed repair. They are also valuable where high performance is required, such as in custody transfer, material balance, internal metering, and settings with wide ambient temperature fluctuations. In these applications, smart instruments increase system accuracy and reduce maintenance time by virtue of their continuous diagnostics and easy data access.
Smart transmitters can send the process variable (PV) over the 4-20 mA link or the digital link. Because the 4-20 mA link requires the PV to go through several analog components, the PV signal can be degraded slightly. The Digital-to-Analog (D/A) converter in the transmitter, the dropping resistor, and the Analog-to-Digital (A/D) converter in the control system are not perfect.
Fig. 7.13
With digital communications, the D/A and A/D are replaced with modems and, more importantly, a software protocol. While these digital replacements do not introduce the same kind of errors as the analog components, they do introduce errors. The time required to assemble the digital message in the transmitter and the time required to decode the message in the control system add significant time delays. These delays are major sources of error in all but the slowest of loops. 4-20 mA devices do not introduce such errors because the D/A and A/D are extremely fast; while they may have minor flaws, none is as significant as the time-delay errors, and it is not possible to compensate for these.
Example
The flow through a pipe varies by 5 percent in 10
seconds. Process improvement requires that a control
loop be added to eliminate the oscillation.
A/D Conversions: While the sensor signal is responding continually as the process changes, the signal is converted to a digital value only at discrete intervals in time. The length of time between conversions from the analog sensor signal to the digital signal is the A/D Conversion Time.
Fig. 7.14
Signal Processing: The digitised signal from the sensor is corrected for non-linearity and temperature effects. The length of time for the microprocessor to perform the polynomial correction is the Signal Processing Time.

Protocol Processing: While an analog value is transmitted instantly, a digital output must be assembled into a message. The maximum length of time that can transpire before the digital message is transmitted is the Protocol Processing Time.
Fig. 7.15
Update Time
Definition: How frequently an output is provided by the transmitter.
The joint effect of the A/D conversion time and the protocol processing time determines the Update Time.


This affects the frequency at which the process is sampled in the measurement situation.

Fig. 7.16

This in turn affects the Resolution of the Digital Instrument.

Resolution
Definition: The maximum capability of a system to faithfully convert an analog signal to digital. It is expressed in bits per word.
The lower the % resolution, the more faithful the representation of the input waveform. This corresponds to a high sampling rate through fast update times. This all gets back to the quality of the electronic componentry and design.

Fig. 7.17


Deadtime
Definition: The time between the initial input change and the output response.
Example: Shows the effect of dead time when considering a step input change in pressure.

Fig. 7.18
The following graphs show how transmitter deadtime destroys measurement accuracy. It manifests
itself as a lateral shift on the time axis.

Fig. 7.19
This lateral shift of the graph represents the age of the process measurement since it was sampled.
The digital sampling frequency should ideally be an integral multiple of the input measurement frequency; if not, Beat Frequency Error or Steady State Error can result.


Input/Output Relationship
fs = sampling frequency
fi = input frequency
The faster the sampling frequency, the more closely the output will follow the input.
For example, if we have a sine wave pressure input cycling at 2 Hz (2 cycles per second) and a sampling rate of 1 Hz (1 cycle per second), then the output will not follow the input as accurately as if the sampling and processing rate were 4 Hz (4 cycles per second).

Fig. 7.20
Fig. 7.21
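The example above can be put into numbers with a short sketch (purely illustrative, not taken from the figures; a small phase offset is assumed so the samples do not all land exactly on zero crossings):

```python
import math

def sample(input_freq_hz, sample_rate_hz, phase=0.5, duration_s=2.0):
    # Sample a unit-amplitude sine pressure input at discrete instants.
    n = int(duration_s * sample_rate_hz) + 1
    return [round(math.sin(2 * math.pi * input_freq_hz * i / sample_rate_hz + phase), 3)
            for i in range(n)]

# A 2 Hz input sampled at 1 Hz: every sample lands at the same point of the cycle,
# so the samples look constant and the oscillation is invisible.
print(sample(2.0, 1.0))  # [0.479, 0.479, 0.479]
# Sampled at 4 Hz the oscillation at least becomes visible, though still coarsely.
print(sample(2.0, 4.0))  # alternates around +/-0.479
```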
STEADY STATE ERROR
If the sampling rate equals the input rate, a steady state error will result.
The following example illustrates the worst case. If the sampling point were lower on the curve, obviously the error would be less.

Fig. 7.22
Are we saying that the digital output can't be used to control a process? No, it can be used on slow loops such as temperature or tank level, but unless the loop is lethargic, the analog output will allow better control of the process.


Summary
Accuracy may be quoted as a percentage of a specified range or referenced against a single measurement point on the scale (point accuracy).
Range terminology includes URL, LRL, URV, LRV and Span. Span is the difference between the actual, calibrated measurement limits of a specific device (URV - LRV).
Errors may be classified as:
1. Zero - fixed offset between true and measured value.
2. Span - difference between calibrated and ideal span.
3. Total Error - zero and span errors combined.
4. Systematic - predictable, repeatable error. Examples include temperature and static pressure effects.
5. Turndown - arises when a span less than the instrument's maximum is used. The smaller the calibrated span, the greater the errors over that span.

When calibration of an instrument is required, the important considerations are:
1. Precision - inherent agreement of multiple measurements.
2. Sensitivity - the size of input required to register an instrument output.
3. Rangeability - limits upon the instrument spans wherein errors are of an acceptable value.
4. Linearity - the deviation of actual instrument output from the ideal linear calibration.
5. Hysteresis - the source of differences in measurement for the same input, depending on the direction of approach.
6. Repeatability - deviation in measurements for the same input, approaching from one direction.
7. Reproducibility - similar to repeatability but approaching from both directions.
8. Dead Band - range of input which doesn't produce an instrument output.

Errors are best calculated by the Root Mean Square method:
(System Error)² = (Error Component #1)² + (Error Component #2)² + ...
Calculations must also include the effects of relative instrument gains.
Mean Time Between Failure (MTBF) is a guide for systems design rather than something immediately useful at the user's level.
Accuracy is also dependent upon the operating environment of the instrument; these dependencies are listed as functional specifications.
The damping time constant is the time taken for an instrument's output to reach 63.2% of its total change.
When comparing the claims of different instrument manufacturers, make sure equivalent assessments of accuracy are correlated.


Discussion
a. From the point of view of accuracy, compare the relative merits of analogue and digital instruments.
b. To what extent is accuracy important in your plant process?
c. Indicate any industry trends towards accuracy of which you are aware.


Test
1. Define the following terms:

Zero Error _______________________________________________________________
________________________________________________________________________
Temperature Effect ________________________________________________________
________________________________________________________________________
Turndown Error ___________________________________________________________
________________________________________________________________________
Repeatability _____________________________________________________________
________________________________________________________________________
Terminal-Based Linearity ___________________________________________________
________________________________________________________________________
Point Accuracy ___________________________________________________________
________________________________________________________________________
MTBF __________________________________________________________________
________________________________________________________________________

2. A measurement is subjected to 3 different sources of error contributing 0.3%, 0.4% and 0.5% respectively. What is the:
A. worst case error range
B. more probable error range

3. A control loop consists of 3 instruments, each having a load of 250 ohms. For successful operation over 4-20 mA, what is the minimum power supply required? Assume the transmitter has a 12 V lift-off voltage.
___________________________________________________

Third Printing: October 1996


Second Printing: December 1993
First printed: October 1991
