
Real-time frequency estimation of

analog encoder outputs


A Bachelor Thesis

P.J.H. Maas
DCT 2008.081

Engineering Thesis Committee:


Prof. Dr. Ir. M. Steinbuch (supervisor)
Dr. Ir. M.J.G. van de Molengraft (coach)
Ir. R.J.E. Merry (coach)

Eindhoven University of Technology


Department of Mechanical Engineering
Control Systems Technology Group

Eindhoven, June 8, 2008

Summary
In motion control applications, optical incremental encoders are used to obtain position and velocity information. Instead of using the zero transition data, which is commonly done and introduces an error of half an encoder count, the analog waveform output of the SinCos encoder can be used. The analog waveforms are not perfectly sinusoidal.
The Heydemann model [1] compensates for amplitude, phase and offset errors. The waveforms have a sawtooth-like form. When the fundamental frequency of this waveform is known, the sinusoid can be reconstructed.
To obtain a fundamental frequency estimate in real time, an algorithm has been developed, based on a literature study. This algorithm makes use of a least square fit. A number of sinusoids with different frequencies are fitted on the encoder signal. The sinusoid with the least squared error has a frequency close to the frequency of the encoder signal. Because the dip in the error has a width of at least fs/N in Hz, with fs the sampling frequency and N the sample length, not all frequencies have to be considered, which reduces the calculation time. In between the considered frequencies, a parabolic interpolation is proposed. Furthermore, an off-the-shelf continuous wavelet transform method has been considered.
The frequency of the sawtooth-like signal is assumed to range from 50 to 1000 Hz. The sample length, N, used is 125 samples, which is 12.5 ms at a sampling frequency of 10 kHz.
The frequency of the signal can be estimated with a systematic error of 2 % at low frequencies and a negligible systematic error at high frequencies. The stochastic error, the standard deviation of the measurements, is below 0.1 Hz for all frequencies. The accuracy can be further enhanced by comparing more sinusoids with different frequencies to the signal, although this also increases the calculation time.
The least squares frequency estimation algorithm has been tested in simulations and on a measurement setup. The results are compared to the results of the wavelet transform method. From these experiments it is concluded that the wavelet transform has a lower stochastic error and a better low frequency (below 50 Hz) performance. The least squares fitting algorithm, on the other hand, performs better when the frequency is changing at low frequencies (for the encoder this means that the speed is changing). This is because the frequency estimation can be done on just 0.6 of a period of a sample, instead of the full period needed by the wavelet transform.

Samenvatting
In motion-controlled systems, incremental encoders are used to obtain position and velocity information. Normally the zero crossing information is used, which introduces an error of half an encoder count. Alternatively, the analog output of a SinCos encoder can be used. These analog signals are not entirely sinusoidal.
The Heydemann model [1] compensates for errors in amplitude, phase and offset. The signals have a sawtooth-like shape. When the fundamental frequency of this signal is known, the sinusoid can be reconstructed.
To obtain a real-time estimate of the fundamental frequency, an algorithm has been developed, based on a literature study. This algorithm makes use of a minimized least squares error. A number of sinusoids with different frequencies are fitted to the encoder signal. The sinusoid with the smallest squared error has a frequency close to that of the encoder signal. Because the dip in the error has a width of at least fs/N in Hz, with fs the sampling frequency and N the sample length, not all frequencies have to be tried, which reduces the calculation time. Between the tried frequencies a parabolic interpolation is used. Furthermore, an existing continuous wavelet transform method has been considered.
The frequency of the sawtooth-like signals is assumed to lie between 50 and 1000 Hz. The sample length, N, used is 125 samples, which is 12.5 ms at a sampling frequency of 10 kHz.
The frequency of the signal can be estimated with a systematic error of 2 % at low frequencies and a negligible systematic error at high frequencies. The stochastic error, the standard deviation of the measurements, is below 0.1 Hz for all frequencies. The accuracy can be further improved by comparing more sinusoids with different frequencies to the signal; however, this also increases the calculation time.
The least squares frequency estimation algorithm has been tested in simulations and on a measurement setup. The results have been compared with those of the wavelet transform. From these experiments it is concluded that the wavelet transform has a lower stochastic error and performs better at low frequencies (below 50 Hz). The least squares algorithm, on the other hand, performs better when the frequency is changing at low frequencies (for the encoder this means that the speed is changing). This is because the frequency estimation can be done on just 0.6 of a period of the sample, instead of the full period used by the wavelet transform.

Contents

Summary

Samenvatting

1 Introduction
   1.1 Problem definition
   1.2 Report outline

2 Specifications and the signal
   2.1 Measurement setup
   2.2 A sawtooth signal
   2.3 Specifications

3 Literature study
   3.1 The methods
      3.1.1 A comparative study and wavelet methods
      3.1.2 Real-time frequency estimation in power systems
      3.1.3 Real-time frequency estimation in sound signals
   3.2 Comparison

4 The least square algorithm
   4.1 The least square algorithm
   4.2 Adaptation
   4.3 Notes on Matlab implementation

5 Simulation
   5.1 Simulation setup
   5.2 Determination of the sample length and accuracy
   5.3 Validation
   5.4 Other signals
      5.4.1 A simple sine
      5.4.2 A square signal
      5.4.3 A saw wave
      5.4.4 A triangle wave with offset

6 Comparison with a wavelet transform method
   6.1 The simulation
   6.2 The results
   6.3 Conclusion

7 Implementation
   7.1 The algorithm and the setup
   7.2 The results
   7.3 Conclusion

8 Conclusion and recommendations
   8.1 Conclusion
   8.2 Recommendations

Bibliography

A Solution for a and b

B The width of a trough

C The frequency estimation program

D The Simulink block scheme

E The interpolation subsystem

F The frequency estimation program at 1 kHz sampling frequency

G Simulation results

Chapter 1

Introduction
In many motion control applications, optical incremental encoders are used to obtain position and velocity information. The working principle of optical incremental encoders is shown in Fig. 1.1. A light source is placed above a rotating encoder disk. The light passing through the encoder disk is captured on a quadrature light detector, which transforms the input into two analog waveforms. The direction of motion and the rotary position of the encoder can be retrieved from these waveforms.
Figure 1.1: Optical incremental encoder principle (light source, rotating encoder disk, quadrature light detector, analog outputs)


Commonly, the rotary position is obtained by counting zero transitions of the analog waveforms. By using only the zero transitions, an error in the position measurement of at most half an encoder count is introduced. Ideally, the two analog waveforms have a sinusoidal shape and are in quadrature, i.e. they have a phase shift of 90 degrees with respect to each other. These analog waveforms can be used to derive the position and velocity information directly.
Due to encoder errors, the analog output waveforms are not perfectly sinusoidal. The offset, phase and amplitude errors can be compensated for using the Heydemann model [1]. The measured analog outputs of the encoder have a sawtooth-like shape rather than a sinusoidal shape; the Heydemann model does not compensate for this.
The sawtooth-like signals after the Heydemann correction have a fundamental frequency. When this frequency is known, the ideal sinusoidal signal can be reconstructed. The frequency changes with the velocity. Both sinusoids have the same fundamental frequency, so the estimation only has to be done on one of them.

1.1 Problem definition

This report is about finding a method for real-time frequency estimation. The signals for which the frequency has to be estimated come from a SinCos encoder and have already been corrected for offset, phase and amplitude errors. They are sawtooth-like. The estimation method is developed primarily for this kind of signal, but it may be useful for other signals as well.

1.2 Report outline

First, a literature study is presented in Chapter 3 to investigate the available methods for real-time frequency estimation. Based on the findings of the literature study, a Least Square Fitting (LSF) algorithm is developed to estimate the fundamental frequency in real time. This is described in Chapter 4. The algorithm is optimized and validated by means of simulations and experiments in Chapters 5 and 7, respectively. It is also tested on other kinds of signals. In Chapter 6, a comparison is made between the LSF algorithm and an existing Continuous Wavelet Transform (CWT) algorithm.

Chapter 2

Specifications and the signal


The signal output of the SinCos encoder has a fundamental frequency within a limited range, depending on the velocity of the rotation. The real-time frequency estimator has to estimate this fundamental frequency. In the following sections, the signal is characterized and the requirements for the frequency estimator are specified, but first the measurement setup is described.

2.1 Measurement setup

The encoder principle is shown in Fig. 1.1. In the measurement setup, the encoder disc, attached to a flywheel, is rotated by a motor. A picture of the setup is shown in Fig. 2.1. The motor is feedback controlled on the zero transition signal of an additional encoder with 5000 slits, in a Simulink environment at a sampling frequency of 1 kHz. A simple PD controller is used. The analog data of a 100-slit encoder is also measured at a sampling frequency of 1 kHz and can be obtained in Simulink as well.

Figure 2.1: The measurement setup

2.2 A sawtooth signal

The measured output signal of the SinCos encoder, described in the introduction, is a sawtooth-like signal. This signal has a fundamental frequency f0 and only odd harmonics, so the spectrum contains f0, 3f0, 5f0, .... The amplitude decreases proportionally to the inverse square of the harmonic number. The spectrum of a sawtooth-like signal is shown in Fig. 2.2. The higher harmonics in the signal produce the sharp peaks [2].

Figure 2.2: A 250 Hz sawtooth-like signal with its spectrum
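As an illustration of this harmonic structure (a synthetic test signal, not the actual encoder data), a signal with only odd harmonics whose amplitudes fall off as the inverse square of the harmonic number can be generated and checked against its spectrum. The sketch below uses Python/NumPy; the function name and parameters are illustrative.

```python
import numpy as np

def sawtooth_like(f0, fs, n_samples, n_harmonics=5):
    """Signal with fundamental f0 and odd harmonics only, amplitudes
    decreasing with the inverse square of the harmonic number."""
    t = np.arange(n_samples) / fs
    x = np.zeros(n_samples)
    for k in range(n_harmonics):
        h = 2 * k + 1                      # odd harmonic numbers 1, 3, 5, ...
        x += np.sin(2 * np.pi * h * f0 * t) / h**2
    return x

fs = 10_000                                # 10 kHz sampling frequency
x = sawtooth_like(250.0, fs, 1000)         # 250 Hz fundamental, 0.1 s of data

# The spectrum should show energy at 250, 750, 1250, ... Hz only.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)                                # 250.0
```

The largest spectral line sits at the fundamental, with the odd harmonics an order of magnitude weaker, matching the spectrum shape of Fig. 2.2.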


The real signal has less sharp peaks, and thus less significant higher harmonics in its spectrum. The real signal is shown in Fig. 2.3. This measurement was done at a sampling frequency of 1 kHz. The signal contains little noise, but it does show an offset of -0.05 V. This offset will be corrected by the Heydemann model.

Figure 2.3: A sample of encoder measurement data

2.3 Specifications

There are several requirements which the frequency estimator must satisfy. First of all, the calculations must be done very fast. The system is assumed to be feedback controlled at a frequency of 1000 Hz. This means that there is only 1 ms available for A/D conversion, the Heydemann correction, the frequency estimation, the position estimation, the actual feedback control and the D/A conversion. Let us assume that half of this time can be used for the frequency estimation. That makes the upper bound for the calculation time 0.5 ms.
Secondly, the frequency range is set from 50 to 1000 Hz. A 100-slit encoder is used. When the encoder rotates at a minimum speed of 0.5 rotations per second, the frequency of the signal will be

f = 0.5 · 100 = 50 Hz.    (2.1)

On the other hand, when the encoder rotates at 10 rotations per second, the frequency of the signal will be

f = 10 · 100 = 1000 Hz.    (2.2)


The assumption of a rotational speed between 0.5 and 10 rotations per second, together with the number of slits on the encoder, gives the frequency range of 50 to 1000 Hz.
This maximum frequency also implies a sampling frequency. To satisfy the Shannon theorem [3], the sampling frequency should be at least twice the maximum frequency occurring in the signal. In practice this frequency is chosen 10 times higher, so a sampling frequency of 10 kHz will be used. The large frequency range thus requires a high sampling frequency, which generates a lot of data points.
Next, there is the accuracy. The frequencies present in a stationary signal can be calculated exactly by the Fourier transform when the signal length goes to infinity [3]; the frequency resolution then goes to zero. The length of the signal segment used here is rather short, to exclude most of the history and thus estimate only the current frequency. This means that the frequency cannot be calculated exactly with a Fourier transform based method. A trade-off has to be made between history and accuracy. Furthermore, the minimum signal length needed to estimate the fundamental frequency is one fundamental period. At low frequencies a longer signal segment is needed than at high frequencies.


Chapter 3

Literature study
To estimate the frequency content of a signal, the Fourier transform is commonly used. The Fourier transform is a stationary method though, which estimates the frequency content without retaining the time information [9]. In real-time frequency estimation, time becomes important. The frequency content changes with time when the speed of the encoder changes, and at every moment the frequency information should be obtained instantaneously. Real-time frequency estimation therefore asks for a non-stationary frequency estimation method.
Real-time frequency estimation is a topic in several research fields. In electrical power systems the frequency is an important parameter; due to generator-load mismatches the mains frequency can change [4]. Frequency analysis of myoelectric signals has been used to determine local muscle fatigue during sustained muscle contractions [5]. The frequency content of a signal is also important in speech and music recognition and manipulation, and in noise reduction [6], [7], [8]. The methods used in these different application fields are discussed in more detail in the next sections.

3.1 The methods

3.1.1 A comparative study and wavelet methods

In a comparative study by Karlsson et al. [5], four time-frequency analysis methods are compared. The short time Fourier transform (STFT) applies the Fourier transform over a rectangular
windowed part of the signal. Within the window the signal is assumed stationary. When the
window is moved over the signal, the frequency content is determined for each time interval.
The Wigner-Ville distribution (WVD) can be interpreted as an energy distribution method. The Choi-Williams distribution (CWD) is a time-frequency distribution and an extension of the Wigner-Ville distribution.
The wavelet analysis calculates the correlation between the signal under consideration and a wavelet function ψ(t). This analyzing wavelet function ψ(t) is referred to as the mother wavelet. Every transformation method must satisfy the Heisenberg inequality, which states that the product of the time resolution Δt and the frequency resolution Δf (the bandwidth-time product) is bounded from below by (3.1):

Δt · Δf ≥ 1/(4π).    (3.1)

Whereas the STFT uses a fixed time-frequency resolution, the mother wavelet function ψ(t) is scaled by the scaling parameter s. This scaling changes the central frequency of the wavelet and the window length. For different frequencies, different time scales can be used. As a result, low frequency content can be analyzed on the required large time scales with a high frequency resolution, while high frequency content is analyzed with a smaller frequency resolution but in a far shorter time interval. The wavelet transform still satisfies the Heisenberg inequality (3.1). It is a multiresolution transform method. A graphical interpretation of the fixed time-frequency resolution is shown in Fig. 3.1, and a graphical interpretation of the change of resolution in a wavelet transform is shown in Fig. 3.2. The wavelet transform is extensively discussed in [9].

Figure 3.1: Constant resolution time-frequency plane

Figure 3.2: Multi resolution time-frequency plane


According to [5], the continuous wavelet transform (CWT) shows a better statistical performance than the other investigated analysis methods. These methods were not used in a real-time
analysis though.

3.1.2 Real-time frequency estimation in power systems

As mentioned before, many methods for real-time frequency estimation are used in electric power systems. These methods include Prony's estimation [4], a Kalman filter [10], a wavelet approach [11] and artificial neural networks [12]. These methods suffer from several drawbacks, as they are described for their specific applications. Because they are used in a power system, they all assume higher harmonics with an amplitude of at most 1 %. In a sawtooth signal, however, heavy harmonic influences are present. Furthermore, these methods assume a known nominal frequency. The frequency range of these methods is far too small. The wavelet transform shows a very high accuracy, but is applied to a very narrow frequency range.
Artificial neural networks do not suffer from the drawbacks stated above, but are not the preferred method because of their black-box character. These systems are very useful when a lot of information is missing. The artificial neural network described in [13] uses Hopfield-type feedback neural networks for real-time harmonic evaluation. The parallel processing provides high computational speeds. The results described in this article look promising, but the system is not tested under high harmonic pollution; it is only tested in a power system.

3.1.3 Real-time frequency estimation in sound signals

In a feedback active noise control system, real-time frequency estimation is used because the frequency information is needed for the reference generator. This frequency estimator is based on the adaptive notch filter (ANF) with constrained poles and zeros, described in [8]. The estimation of the notch coefficients is done by a linearized minimal-parameter estimation algorithm. The accuracy of this method is not very good in noisy environments.

Figure 3.3: Principle of fundamentalness


A fundamental frequency extraction method is described in [6]. It is based on the concept of fundamentalness and is a wavelet based method. The fundamentalness is defined to have its maximum value when the frequency and amplitude modulation magnitudes are minimal, and it has a monotonic relation with the modulation magnitudes. The concept of fundamentalness is illustrated in Fig. 3.3. When no harmonic component is within the response area of the analyzing wavelet (a), the fundamentalness provides the background noise level. When the fundamental component is inside the response area, but not at the characteristic frequency of the analyzing wavelet (b), the fundamentalness is not very high, because of the low signal-to-noise ratio. When the frequency of the fundamental component agrees with the characteristic frequency of the analyzing wavelet (c), the highest signal-to-noise ratio causes the fundamentalness to be maximal. When the frequency of a higher harmonic component agrees with the characteristic frequency of the wavelet, the fundamentalness is not very high either, because two or more harmonic components are then located within the response area, due to the filter shape design.
As a result, the fundamental frequency is obtained every 1 ms in a search range of 40 to 800 Hz in [6]. For signal-to-noise ratios of 20 dB and higher, the fundamental frequency is obtained successfully.
A fundamental frequency estimation (FFE) algorithm based on a least square fit is proposed in [7]. An error is calculated as a function of the frequency. This error shows a dip when the sinusoid fitted on a signal segment has the same frequency as a frequency component of the signal segment.

Table 3.1: Comparison of methods. The methods wavelet [11], adaptive notch [8], Prony estimation [4], neural networks [12], neural networks [13], least squares [7], fundamentalness [6] and Kalman filter [10] are rated (+, +/-, -) on step response, tracing, accuracy, range, harmonics, noise and calculation time.
The least square fit has two crucial properties. One property is that the fundamental frequency shows the lowest squared error. The other defines the minimum width of the dip in the squared error, referred to as a trough in [7]; hence the error does not have to be calculated for every frequency. The error is only calculated for analyzing frequencies sufficiently far apart from each other, enhancing computational efficiency.
The method successfully estimates frequencies in a range of 98 to 784 Hz, with a sample length of less than 5 ms. The computation time is about 8 ms, measured on a 30 MIPS (million instructions per second) processor. A Pentium 4 at 2.2 GHz does about 4000 MIPS, so with current technology this should not be a problem.

3.2 Comparison

Based on the results presented in the articles studied, a comparison table has been made, in which several criteria are judged. These criteria are:
- step response: fast estimation of a suddenly changing frequency
- tracing: fast estimation of a gradually changing frequency
- accuracy: accuracy of the estimated frequency with respect to the reference frequency
- range: frequency range of the estimation method
- higher harmonics: possible negative influence on the performance by disturbance from higher harmonics
- white noise: possible negative influence on the performance by disturbance from noise
- calculation time: for real-time frequency estimation the method should be fast
It should be noted that comparing the results from the different articles is difficult because they do not use the same test methods, and some methods are not tested on all criteria in the articles. This makes the comparison a rough one. The results of the comparison are shown in Table 3.1.
The conclusion, based on the comparison, is that the method based on fundamentalness [6] and the FFE algorithm based on a least square fit [7] are the most useful. The FFE algorithm is a quite simple algorithm based on linear algebra, while the fundamentalness algorithm uses integrals and second order differentials. The fundamentalness method needs advanced algorithms for integration, for example an adaptive Simpson rule. These algorithms need extra calculation time, and this has to be done for all channels in the frequency range in which the wavelet works.
Because of its simplicity and the promising results in the paper, the fundamental frequency estimation (FFE) algorithm based on a least square fit is used for the real-time frequency estimation of the encoder signals.


Chapter 4

The least square algorithm


In this chapter, the chosen Least Square Fit (LSF) algorithm, as described in [7], is presented. The algorithm is used in a somewhat adapted form, which is discussed as well, followed by some notes on the applicability of the algorithm. The algorithm is first written as a Matlab script and after that built in the Simulink environment. The script and the Simulink block scheme are included in Appendices C, D and E.

4.1 The least square algorithm

The signal that is to be estimated from the discrete sawtooth signal segment coming from the encoder can be described by (4.1):

x̂(n) = a sin(ωn) + b cos(ωn),  with n = 1, 2, …, N − 1, N.    (4.1)

In this signal, ω = 2πf/fs is the relative fundamental frequency, with f the fundamental frequency in Hz and fs the sampling frequency. This function has to be fitted on the real signal x(n). The parameters a and b determine the amplitude and phase of x̂(n), as they do in the Fourier transform. The squared error is given by

e = Σ_{n=1}^{N} (x̂(n) − x(n))².    (4.2)
Eq. (4.2) is a function of a, b and ω, and is, for a given ω, minimal when

∂e/∂a = 2a Σ_{n=1}^{N} sin(ωn) sin(ωn) + 2b Σ_{n=1}^{N} cos(ωn) sin(ωn) − 2 Σ_{n=1}^{N} x(n) sin(ωn) = 0,    (4.3)

a Σ_{n=1}^{N} sin(ωn) sin(ωn) + b Σ_{n=1}^{N} cos(ωn) sin(ωn) − Σ_{n=1}^{N} x(n) sin(ωn) = 0,    (4.4)

aP + bQ + W = 0,    (4.5)

and

∂e/∂b = 2a Σ_{n=1}^{N} sin(ωn) cos(ωn) + 2b Σ_{n=1}^{N} cos(ωn) cos(ωn) − 2 Σ_{n=1}^{N} x(n) cos(ωn) = 0,    (4.6)

a Σ_{n=1}^{N} sin(ωn) cos(ωn) + b Σ_{n=1}^{N} cos(ωn) cos(ωn) − Σ_{n=1}^{N} x(n) cos(ωn) = 0,    (4.7)

aQ + bR + X = 0,    (4.8)


with

P = Σ_{n=1}^{N} sin(ωn) sin(ωn),    (4.9)

Q = Σ_{n=1}^{N} cos(ωn) sin(ωn),    (4.10)

R = Σ_{n=1}^{N} cos(ωn) cos(ωn),    (4.11)

W = −Σ_{n=1}^{N} x(n) sin(ωn),    (4.12)

X = −Σ_{n=1}^{N} x(n) cos(ωn).    (4.13)
The solution to this pair of equations, (4.5) and (4.8), is given by (see also Appendix A)

a = (QX − RW)/(PR − Q²)    (4.14)

and

b = (QW − PX)/(PR − Q²).    (4.15)

With a and b known, the estimated signal x̂(n) is known and the squared error e(ω) can be calculated for each radial frequency ω. The result of such a calculation, for a 250 Hz signal, is shown in Fig. 4.1.
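This least square fit translates almost directly into code. The following sketch is in Python/NumPy rather than the Matlab used in the thesis; the function and variable names are illustrative. It computes the squared fitting error for one candidate frequency and sweeps a coarse candidate grid, so the trough appears at the frequency of a test tone:

```python
import numpy as np

def lsf_error(x, f, fs):
    """Squared error of the best least squares fit of
    a*sin(w*n) + b*cos(w*n) to x, following Eqs. (4.1)-(4.15)."""
    N = len(x)
    n = np.arange(1, N + 1)
    w = 2 * np.pi * f / fs                 # relative radial frequency
    s, c = np.sin(w * n), np.cos(w * n)
    P, Q, R = s @ s, c @ s, c @ c
    W, X = -(x @ s), -(x @ c)              # sign convention of (4.12)-(4.13)
    D = P * R - Q * Q
    a = (Q * X - R * W) / D                # Eq. (4.14)
    b = (Q * W - P * X) / D                # Eq. (4.15)
    return np.sum((a * s + b * c - x) ** 2)

fs, N, f0 = 10_000, 125, 250.0
n = np.arange(1, N + 1)
x = np.sin(2 * np.pi * f0 * n / fs)        # a pure 250 Hz test tone

# Sweep candidate frequencies; the error dips near 250 Hz.
cands = np.arange(50.0, 1000.0, 10.0)
errs = [lsf_error(x, f, fs) for f in cands]
best = cands[int(np.argmin(errs))]
print(best)                                # 250.0
```

For a pure tone that lies exactly on the candidate grid, the fit is exact and the error at the matching frequency is essentially zero, so the minimum lands on the true frequency.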

Figure 4.1: Fitting error of a 250 Hz triangular wave


Choi states in [7] that this error function, e(ω), has two important properties:
1. Each significant trough in the function e(ω) corresponds to a sinusoidal component of the estimated signal segment x(n). The value of ω at the minimum point of a trough is equal to the frequency of the corresponding component.
2. The width of each significant trough in the function e(ω) is at least 2π/N in radial frequency. This width is independent of the frequencies of the sinusoidal components of the input signal, provided that these are located sufficiently far apart from each other, that is, their frequencies differ by more than 2π/N in radial frequency. This can be seen in Fig. 4.1.


In [7] the method is used to estimate the fundamental frequency of musical signals. The most important frequency component, i.e. the one with the highest amplitude, is the fundamental frequency, and the higher harmonics are integer multiples of the fundamental frequency. This is also the case for sawtooth-like signals, except that only the odd multiples are contained in the spectrum. This makes the method as described in [7] useful for sawtooth-like signals.
In music signals the fundamental frequency, ω0, must be higher than 2π/N to avoid interference between the troughs. For a sawtooth signal, the next trough lies twice as far away, because the even multiples fall out of the spectrum. The fundamental frequency of the sawtooth signal, ω0, therefore only has to be higher than π/N. Otherwise the next trough of e(ω), corresponding to the next higher harmonic, falls within the fundamental trough. The minimum fundamental frequency that can be estimated without interference of the troughs is, in Hz, given by (4.16):

f0 ≥ fs/(2N).    (4.16)

The fundamental frequency component only shows the lowest error if it also has the highest amplitude. An example for which this is not true is shown in Fig. 4.2. In this figure, two sinusoids with frequencies of 250 Hz and 375 Hz, respectively, are added to each other. The fundamental frequency of this signal is 125 Hz, but the lowest error is shown for both the 250 Hz and the 375 Hz components. These are not the fundamental frequency.

Figure 4.2: The frequency estimation for which the fundamental frequency does not have the lowest error
The second property makes it necessary to compute e(ω) only at values of ω evenly spaced 2π/(3N) apart. This makes sure that at least three frequencies fall into a trough, of which the middle one has the lowest error. It should be noted that the accuracy improves when more frequencies in the range are tested. Furthermore, the bound on the width of a trough is proven in Appendix B.

4.2 Adaptation

The algorithm as described above is adapted to work with frequencies in Hz instead of relative radial frequencies. In [7] the number of intervals is calculated by m = ωmax/(2π/(3N)), choosing ωi = 2πi/(3N) with i = 1, 2, …, m. Substituting ωmax = 2πfmax/fs gives

m = 3N fmax/fs.    (4.17)

Note that the calculations are still done with relative frequencies; only the input is changed for usability. The 3 in (4.17) is made a parameter, nppt (minimal Number of Points Per Trough), so the accuracy can be enhanced when needed. Furthermore, a small factor 1.2 is introduced to have interpolation points beyond exactly 1 kHz. This leads to (4.18):

m = 1.2 nppt N fmax/fs.    (4.18)

The error function is now a function of the frequency, f, instead of the relative radial frequency, ω.
When the sample used for the least square fitting procedure does not contain an integer number of periods, the estimated frequency will not be the fundamental frequency. This happens because only a limited number of frequencies is tested. The tested frequency with the lowest error will be near the fundamental frequency, though. To enhance the frequency estimation, a parabolic interpolation is used to interpolate between the lowest point and the two neighboring points, which also lie in the trough. The parabolic interpolation is done by linear regression [14]. This leads to the parabolic coefficients y1, y2 and y3 of (4.19):

e = y1 f² + y2 f + y3. (4.19)

The minimum of this interpolated trough is given by

de/df = 2y1 f + y2 = 0, (4.20)

leading to

f0 = −y2/(2y1). (4.21)

4.3 Notes on Matlab implementation

The algorithm now has two variables left. The sample length, N, needs to be optimized. For high frequency signals, N should be low, such that there is not too much history in the estimation. When the signal contains a low fundamental frequency, N needs to be high to cover enough of the signal for a useful fit. If N is too high, the matrices in the calculation become so large that real-time calculation is no longer possible. The optimization of N is done by simulations in the next chapter. Furthermore, the accuracy can still be enhanced by setting nppt.
To enhance the speed of calculation, most variables are calculated off-line. These variables are needed in the algorithm. There are matrices s and c containing the sinusoids as a function of ω and n. All values in the matrices which are a function of ω are saved horizontally, while all values related to time, n, are saved vertically in the matrices:

s, c ∈ ℝ^(N×m). (4.22)

The vectors P, Q and R are calculated and saved horizontally, because they are a function of ω only. Because the denominator in (4.14) and (4.15) is the same, a vector D = PR − Q² is introduced.
When the algorithm was first written, most calculations were programmed in loops. The Matlab language is made to deal with matrices, and calculations in loops are very slow. All loops are therefore replaced by matrix calculations. Some vectors have to be expanded; this is done by multiplying a column of ones of the right dimensions with the row vector. Hence there are no loops in the algorithm, enhancing the calculation time and making it work under real-time conditions.
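As an illustration of this vectorized structure, a compact sketch in Python/NumPy is given below. It is not the thesis implementation (that is the Matlab program of appendix C); the 20 Hz test grid and the residual formula via the standard least-squares normal equations are choices made here for the sketch:

```python
import numpy as np

fs, N = 10_000, 125                    # sampling frequency [Hz], sample length
f = np.arange(20.0, 1001.0, 20.0)      # test frequencies [Hz] (illustrative grid)

# Off-line part: matrices of test sinusoids, time vertical, frequency horizontal
n = np.arange(N)[:, None]              # column of time indices
w = 2.0 * np.pi * f[None, :] / fs      # row of relative radial frequencies
C, S = np.cos(w * n), np.sin(w * n)    # the matrices c and s, each N x m

P = np.sum(C * C, axis=0)              # per-frequency inner products
Q = np.sum(C * S, axis=0)
R = np.sum(S * S, axis=0)
D = P * R - Q * Q                      # common denominator D = PR - Q^2

def lsf_estimate(x):
    """Least-square fit of a*cos + b*sin at every test frequency at once."""
    W, X = x @ C, x @ S                # projections of the signal, no loops
    a = (R * W - Q * X) / D            # normal-equation solutions
    b = (P * X - Q * W) / D
    e = np.sum(x * x) - a * W - b * X  # squared fit error per test frequency
    k = int(np.argmin(e))
    if 0 < k < len(f) - 1:             # parabolic interpolation on three points
        y1, y2, _ = np.polyfit(f[k - 1:k + 2], e[k - 1:k + 2], 2)
        return -y2 / (2.0 * y1)        # vertex of the parabola, cf. (4.21)
    return f[k]
```

For a 250 Hz sinusoid sampled at 10 kHz this returns a value close to 250 Hz. Note that C, S, P, Q, R and D depend only on the frequency grid, so they can all be computed off-line, which is the point of the vectorization described above.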
The resulting Matlab program is presented in appendix C.

Chapter 5

Simulation
There are two parameters left to determine: the sample length, N, and the number of points per trough, nppt. These are determined by means of simulation. Also the interpolation method can be further refined.
A trade-off between accuracy and history will need to be made by choosing the sample length N, as described in Section 4.3. In the following sections, first the simulation setup is explained. Then the sample length is determined and the number of points per trough is optimized, to enhance the accuracy. Furthermore, experiments are conducted with several interpolation methods.
The algorithm is tested with the found settings and the results are presented in Section 5.3. In Section 5.4 the algorithm is tested on other kinds of signals.

5.1 Simulation setup

The simulation experiments are done in Matlab. Six performance indicators are calculated in loops for several sample lengths and several signal frequencies. The signal used is an ideal sawtooth signal, as shown in Fig. 2.2. These performance indicators are stored in a matrix. The mean estimated frequency, fm, and the standard deviation, σ, are calculated. The measurements are assumed to be distributed according to the Student's t-distribution. 21 measurements are done, so the Student's t factor becomes 2.1. This means that a single measurement will be within the range fm ± 2.1σ with 95 % certainty [15].
The performance indicators are:
Mean frequency: The average frequency of the 21 measurements of estimated frequencies.
Standard deviation: The standard deviation of the 21 measurements of estimated frequencies.
This is a measure for the stochastic error.
Absolute error: The difference between the mean estimated frequency and the frequency of
the test signal. This is a measure for the systematic error.
Relative error: The absolute error divided by the frequency of the test signal. This is also a
measure for the systematic error.
Calculation Time: The mean calculation time. Although this is not a representative value, it gives an indication of the calculation time. This value is not representative because the measurements are done in a non-ideal computational environment.
Number of periods: The number of periods in the calculation is a measure for the signal
history included in the measurement.

When the sample length becomes too short for a low frequency signal, the frequency of this
signal cannot be estimated. The test frequency with the lowest error is then the first point of
e(f ). There is no point before this minimum point to conduct a parabolic interpolation with.
When this occurs, the element for the current sample length and signal frequency of a check
matrix, which is initially 1 for all tested sample lengths and frequencies, is set to infinity. After
all measurements, the performance indicator matrices are divided by this matrix, so the not
testable elements become zero. Therefore, at low frequencies with low sample lengths, there are
zeros in the performance indicator matrices.
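This masking trick can be sketched as follows (array shapes chosen purely for illustration):

```python
import numpy as np

# check[i, j] records whether sample length i / frequency j could be tested
check = np.ones((2, 3))
check[0, 0] = np.inf                 # combination that could not be tested

indicator = np.full((2, 3), 2.5)     # some performance indicator values
masked = indicator / check           # 2.5 / inf = 0: untestable entries -> 0
print(masked[0, 0], masked[1, 2])    # → 0.0 2.5
```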
The input ideal sawtooth signals have some noise added. Because the real encoder signals do not contain much noise, only −40 dB of noise is added. The amplitude of the sawtooth signal is 1.

5.2 Determination of the sample length and accuracy

The first set of simulations is done with a wide range of frequencies and sample lengths. The
number of points per trough is set to three, as is done in [7].
The frequencies for which the performance indicators are calculated are

Fx = [50 75 100 150 200 300 400 500 750 1000] Hz.
These frequencies are all within the range of 50 to 1000 Hz. This frequency vector is biased to lower frequencies, because the method tends to perform worse at lower frequencies. This is because there are fewer periods in these low frequency samples. The frequency is kept constant during a test.
The sample lengths tested are

N = [10 20 30 40 50 75 100 150 200 400].
This sample length vector is biased to the lower sample lengths, because these are most welcome. A lower sample length excludes more history; hence the frequency estimation will be better with a changing fundamental frequency. The calculation time also becomes shorter with low sample lengths.
Fig. 5.1 shows the systematic error for the different combinations of sample length and signal frequency. From this figure it is clear that when the number of periods in the sample is an
integer, the systematic error is high. This is caused by the interpolation. As indicated before,
the interpolation is needed when the signal frequency is not equal to a tested frequency.

Figure 5.1: relative error for 3 points per trough


The systematic error should be lower, and this is achieved by choosing more frequencies within the range: the minimum number of points per trough is set to five. Now another option becomes available: with a minimum of five points per trough it is possible to estimate the fundamental frequency by an interpolation with two points on each side of the minimum. A parabolic function is then least-squares fitted on five points. The systematic error increases, though, when the interpolation is done on five points.

Figure 5.2: relative error for 5 points per trough


In appendix G the performance is listed in tables for a setting of five points per trough and three interpolation points. Fig. 5.2 shows the systematic error for these settings. The systematic error is now below 3.5 % at a sample length of 125. This seems to be a sample length for which the accuracy is acceptable, and the performance is further analyzed around this sample length value. The sample length is changed from 80 to 180 in steps of 5. The signal frequency is changed from 25 to 1100 Hz in steps of 25 Hz. The systematic and stochastic errors are shown in Fig. 5.3 and Fig. 5.4, respectively.
From these figures, a sample length of 125 seems to be a reasonable trade-off between accuracy and history. Also, the matrices are kept small enough to obtain a high calculation speed. The history of the signal needed for the frequency estimation is then

Δt = N/fs = 125/10000 = 0.0125 s. (5.1)

At a low signal frequency of 50 Hz, this means that there are 0.625 periods involved. At a high
signal frequency of 1000 Hz, this means there are 12.5 periods involved.
The number of periods is calculated by (5.2), with fx the signal's fundamental frequency:

per = N fx/fs. (5.2)

Figure 5.3: relative error for five points per trough around N = 125
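Equations (5.1) and (5.2) can be checked with a quick computation (Python is used here only for illustration):

```python
fs, N = 10_000, 125

dt = N / fs                  # Eq. (5.1): signal history in seconds

def periods(fx):
    return N * fx / fs       # Eq. (5.2): number of periods in the sample

print(dt, periods(50), periods(1000))   # → 0.0125 0.625 12.5
```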


Figure 5.4: stochastic error for five points per trough around N = 125
By these simulations, a value of 125 is obtained for the sample length, N. Furthermore, the accuracy is enhanced by choosing a minimum of five points per trough. The interpolation method is a parabolic interpolation on three points. According to (4.16), the minimum frequency to be estimated without the troughs interfering is 40 Hz, so f0 ≥ 40 Hz. Note that interference for a sawtooth-like signal does not have much influence, since the higher harmonics do not have a high amplitude.
The simulations have also shown a calculation time of 0.6 ms. This is measured under bad conditions, in which Matlab does not only calculate the frequency but is also doing the loop maintenance. These loops are only introduced for recording the simulation results. The 0.6 ms calculation time only shows that the 125-sample-long signal segments are not too long, with too big matrices as a consequence. There are several options to decrease the calculation time. These options include:
A faster computer: These tests are done on a 1.86 GHz Pentium M processor. There are faster processors available. Tests on a 3.2 GHz Pentium 4 processor showed calculation times of 0.4 ± 0.1 ms.
A different operating system: These tests were done in Windows. Calculation in, for example, Linux will have shorter calculation times.
C code: Reprogramming the algorithm in Matlab embedded C-code should decrease the calculation time significantly.
There is just 12.5 ms of signal history used for the frequency estimation. This becomes important for a changing velocity of the encoder, resulting in a changing frequency.

5.3 Validation

The algorithm is now complete and optimized. For validation, the relative error and standard deviation, respectively measures for the systematic and stochastic error, are analyzed. A wide range of input frequencies is tested. The input frequencies are biased to the lower frequencies, because the performance is lower there. 21 measurements are done per frequency, as described in Section 5.1. −40 dB of white noise was added to the signal, to simulate a realistic encoder signal. The results are shown in Fig. 5.5 and Fig. 5.6.
At low frequencies the relative error peaks to just under 2 %, but at higher frequencies the relative error is very low. The maximum of the stochastic error is about 0.1 Hz, independent of the signal frequency. This means that the estimated frequency will be within the range fm ± 2.1σ = fm ± 0.21 Hz with 95 % certainty. This can become important for low frequencies, where 0.2 Hz is relatively much. At low frequencies, the standard deviation tends to be a bit lower, though, as can be seen in Fig. 5.6.

Figure 5.5: relative error for final algorithm

Figure 5.6: stochastic error for final algorithm


The accuracy can be further enhanced by adding more points per trough. The test frequencies are completely free to choose in this algorithm, as long as there are enough to find a minimum, thus at least three points per trough. The more frequencies are chosen, though, the longer the calculation will take. The frequencies can even be unevenly spaced, so that there are more points at lower frequencies than there are at high frequencies, where the error tends to be lower. This would improve the performance at low frequencies, without adding much calculation time. When computers become faster and the program language becomes more efficient, the accuracy can be further enhanced. Both the systematic and the stochastic error would become lower.

5.4 Other signals

In this section the fundamental frequency estimator will be used to estimate the fundamental frequency of signals other than sawtooth-like signals. These signals satisfy the requirement that the fundamental frequency is the frequency component with the highest amplitude.

5.4.1 A simple sine

The first signal that is tested is a simple 250 Hz sine. This will probably work, because the sawtooth-like signals are sinusoids with odd higher harmonics. The result is shown in Fig. 5.7. The error plot shows a trough at 250 Hz.


Figure 5.7: 250 Hz sine signal and error plot

5.4.2 A square signal

A square signal has the same higher harmonics as the triangle wave: only the odd integer multiples of the fundamental frequency. These higher harmonics are present with a higher amplitude, though. The amplitude of the higher harmonics is proportional to the inverse of the harmonic number. The error plot, Fig. 5.8, shows this stronger harmonic influence by the small dip at 750 Hz.
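This 1/k law can be checked numerically; the following sketch (not from the thesis) projects a sampled square wave onto its first harmonics:

```python
import numpy as np

fs, f0 = 10_000, 250
t = np.arange(10_000) / fs                  # 1 s, an integer number of periods
x = np.sign(np.sin(2 * np.pi * f0 * t))     # unit square wave

# Fourier sine coefficients: close to 4/(pi*k) for odd k, zero for even k
bks = {k: 2.0 / len(t) * np.sum(x * np.sin(2 * np.pi * k * f0 * t))
       for k in (1, 2, 3, 4, 5)}
for k, bk in bks.items():
    print(k, round(bk, 3))
```

The odd coefficients come out near 4/(πk) (about 1.27, 0.42 and 0.25), while the even ones vanish; the small deviations from exactly 4/(πk) are sampling effects.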

Figure 5.8: 250 Hz square signal and error plot

5.4.3 A saw wave

The saw wave includes both the even and the odd multiples of the fundamental frequency. This
signal comes close to the signals analyzed in [7]. The error plot in Fig. 5.9 shows a large trough
at 250 Hz, the fundamental frequency, but also small dips at the higher harmonics, at 500 Hz
and 750 Hz.


Figure 5.9: 250 Hz saw wave signal and error plot

5.4.4 A triangle wave with offset

An offset of 1 is applied to a 250 Hz triangle wave. This offset introduces a trough at a very low frequency. The 250 Hz dip can still be seen in the error function, but the minimum is at the very low frequency, which comes from the offset. The result is shown in Fig. 5.10. The method is not useful for this kind of signal. It may be useful when the offset is filtered out first, as is done for the SinCos encoder signals by the Heydemann model [1].

Figure 5.10: 250 Hz sawtooth signal with offset and error plot


Chapter 6

Comparison with a wavelet transform method

A continuous wavelet transform (CWT) algorithm that was developed at the Eindhoven University of Technology is used to find the fundamental frequency of a sawtooth-like signal. This method is compared to the least square fitting method (LSF), presented in this report. The simulations are done in a Simulink environment. The CWT algorithm takes the frequency with the highest amplitude in the spectrum and outputs that as the fundamental frequency.

6.1 The simulation

First a sawtooth signal sample of 10 seconds with a changing frequency from 50 to 400 Hz is
made. Hence the input frequency is known. Two simulations are done, one without and one
with noise, for both methods. The noise is added as a random number with a variance of 0.01
and an average of 0.
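A sketch of such a test signal in Python (the thesis generates it in Matlab/Simulink; the linear course of the sweep and the plain sawtooth shape are assumptions made here):

```python
import numpy as np

fs, T = 10_000, 10.0                          # sampling frequency [Hz], duration [s]
t = np.arange(int(T * fs)) / fs
f_inst = 50.0 + (400.0 - 50.0) * t / T        # assumed linear sweep 50 -> 400 Hz
phase = 2 * np.pi * np.cumsum(f_inst) / fs    # integrate frequency to phase
saw = 2.0 * ((phase / (2 * np.pi)) % 1.0) - 1.0      # sawtooth in [-1, 1]
noisy = saw + np.random.normal(0.0, 0.1, saw.shape)  # noise with variance 0.01
```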

Figure 6.1: least square method: a. without noise, b. with noise

6.2 The results

The frequency of this signal is then estimated by both the CWT algorithm and the LSF algorithm. The results are shown in Fig. 6.1 and Fig. 6.2.
There are two sources for errors. First there is the error on the frequency estimation. For
high frequencies, the frequency resolution for the CWT algorithm is lower and the frequency
estimation gets worse. The frequency estimation at low frequencies is less accurate for the LSF
algorithm. This is seen by the high error in Fig. 6.1 at low frequencies.
Furthermore there is the delay. The low time resolution of the CWT algorithm at low frequencies introduces an error, because changes in the fundamental frequency are seen quite late.
The LSF algorithm uses only 0.625 period for the frequency estimation at a frequency of 50 Hz.
Therefore, there is less delay, causing less error. The time-frequency resolutions of the CWT
method are illustrated in Fig. 3.2.
The CWT shows a startup error. This is probably because there is not yet a full low-frequency period, so the highest amplitude found is a high-frequency one. The spectrum is thus not complete yet at that moment.

Figure 6.2: wavelet transform method: a. without noise, b. with noise

6.3 Conclusion

The results of these simulations show that the LSF method has a higher stochastic error than the CWT method for low frequencies but a lower stochastic error for higher frequencies.
Under noisy conditions the stochastic error is higher and the estimations get worse. The wavelet method is less affected by noise. At higher frequencies, the stochastic error of the CWT method becomes higher. This is because of the lower frequency resolution at high frequencies.
This frequency resolution is illustrated in Fig. 3.2.
On the other hand, the CWT shows more delay: the estimated frequency lags behind the current frequency, especially at low frequencies. This introduces a systematic error, because the time resolution is lower. The LSF method uses a smaller sample size, just 0.625 period, so the estimation suffers less delay. This enhances the accuracy, compared to the CWT algorithm, when the fundamental frequency of the signal is changing.


Chapter 7

Implementation
The frequency estimation algorithm is implemented on the measurement setup to estimate the
frequency of the encoder output in real-time. In this chapter the results will be discussed. First
the algorithm is adapted to the current measurement setup.

7.1 The algorithm and the setup

Because the current measurement setup, as described in Section 2.1, only measures at a sampling rate of 1 kHz, the algorithm is changed. Because the sampling frequency is a factor of 10 lower, the maximum frequency to be estimated is also lowered by a factor of 10: fmax = 100 Hz. The minimum frequency to estimate is assumed to be 10 Hz. The sample length, N, is set to 80. The number of periods at 10 Hz in the sample is then just 0.8, and at 100 Hz the number of periods is 8. The length of the sample is 80 ms. This is quite long, but there are fewer data points per unit of time than at a sampling frequency of 10 kHz, so the sample length needs to be longer to ensure a high accuracy. That is why 0.8 period is used instead of 0.6. Otherwise the accuracy would be too low. For higher low-frequency accuracy, the minimum number of points per trough, nppt, is increased to 9. The changes to the algorithm settings are included in appendix F.
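For these settings, the number of test frequencies following from (4.18) and the period counts can be checked quickly (a sketch; Python instead of Matlab):

```python
import math

fs, N = 1_000, 80       # implementation setup: 1 kHz sampling, 80 samples
fmax, nppt = 100, 9     # frequency range of interest, points per trough

m = math.ceil(1.2 * nppt * N * fmax / fs)   # Eq. (4.18)
periods_low = N * 10 / fs                   # periods in the sample at 10 Hz
periods_high = N * 100 / fs                 # periods in the sample at 100 Hz
print(m, periods_low, periods_high)         # → 87 0.8 8.0
```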
The frequency estimation algorithm is connected to the analog output of the encoder in
Simulink. The Simulink block scheme is used, as it is provided in appendix D. The parabolic
interpolation subsystem is provided in appendix E.
The reference signal for the feedback control of the encoder disc is shown in Fig. 7.1. Both the position and the velocity are shown. The velocity is what influences the frequency, while the desired position is the reference for the feedback controller. The velocity is first constant; then the encoder speeds up and is kept at a constant speed until it is slowed down again. The velocity is then constant again, and the encoder slows down until it turns around and speeds up in the other direction. When the encoder rotates the other way around, the velocity is kept constant again. This reference thus shows a constant velocity, a changing velocity and even a turn-around of the encoder disc. That is where the frequency estimation is expected to fail, because when the encoder does not turn, there is no frequency content in the signal.
The velocity is not known exactly, because the input of the controller design of the system is unknown. The velocity is therefore tuned to give a minimum frequency of about 10 Hz and a maximum frequency of 100 Hz. The profile is shown in Fig. 7.1.
Note that the frequency estimation will be done on the uncorrected waveforms. The Heydemann correction is not implemented yet, so the offset may distort the estimations.


Figure 7.1: position and velocity input

7.2 The results

The output of the real-time Simulink scheme is the estimated frequency. This frequency estimation of the least square fit (LSF) is saved. Additionally, the waveform and the reference are saved. For comparison, the fundamental frequency of the waveform is also estimated offline by the continuous wavelet transform (CWT) method. The results of the frequency estimation are shown in Fig. 7.2. The absolute value of the velocity of the reference is also shown in this figure. The frequency profile should follow the velocity profile.

Figure 7.2: implemented frequency estimation results


The reference is not followed exactly. There are several reasons for this, including:
Control error: The controller makes an error. The velocity profile is thus not followed exactly.
Coulomb friction: When the flywheel turns around, it stops for some time. Because it does not start moving immediately, as the reference does, this introduces an error.
Resonance: A bit of resonating sound was heard at high velocities. Resonance increases the error. This is probably because the PD controller amplifies the high frequencies in the steps of the encoder counts.
No Heydemann correction: The waveform is not the sawtooth-like signal, because the Heydemann correction was not available yet. Especially the offset makes the frequency estimation by the proposed least square fitting difficult. The wavelet transform will have fewer problems with this.


From the results it can be seen that, at high frequencies, the reference is not followed completely. This is probably because of the maximum speed of the motor. When the encoder is turned around, at the end of the measurement (negative velocity), the frequency is expected to have the same value as before the turn-around. The frequency is actually lower. This was also seen in the experiment, when the flywheel, after the turn-around, rotated very slowly. This can be because of friction effects or a bad friction feed-forward.
This low frequency at the end is estimated very badly by the LSF method. At this low frequency the offset of the waveform becomes so important that the minimum error can be found lower than the fundamental frequency of the waveform. The wavelet transform method shows a far better low-frequency performance.
Both the wavelet transform and the least square fit show the same peaks around the reference in the frequency estimation. The wavelet transform shows those only a bit later, because of the lower time resolution: it uses a longer sample to measure the frequency from. The LSF shows larger peaks around the reference, probably because of the higher stochastic error.
When the encoder turns around, the frequency becomes very low. Both the CWT and the LSF method cannot estimate the frequency anymore. The CWT method, with its better low-frequency performance, does this better than the LSF method, but also loses track of the frequency. Note that, when the velocity becomes zero, there is no frequency content anymore, so the frequency of the waveform cannot be estimated. Furthermore, the choice of the frequencies at which the least square fit is evaluated can enhance the low-frequency performance.

7.3 Conclusion

To reconstruct the desired sinusoid from the measurement data, there are many errors to compensate for. The Heydemann model corrects the offset, amplitude and phase errors in the signal. Without this correction, the CWT shows far better results than the LSF method. The low-frequency performance of the CWT is also better. The LSF method shows a changing frequency earlier than the CWT at low frequencies. This is because of the low time resolution of the CWT at low frequencies. This can be seen in Fig. 3.2 and becomes important when there are changing frequencies in the system. For an encoder this means a changing velocity.


Chapter 8

Conclusion and recommendations


8.1 Conclusion

To enhance the position measurements of a SinCos encoder, the analog waveforms can be used,
instead of the zero transition data, which is usually done. When the fundamental frequency of
these signals is known, the ideal sinusoid can be reconstructed.
The frequency of the signals is assumed to be in the range of 50 to 1000 Hz. The sampling
frequency needs to be at least 2 kHz, because of the Shannon theorem. In practice a sampling
frequency of 10 kHz is chosen.
Several real-time frequency estimation algorithms are considered. Most useful were a wavelet transform method, based on the concept of fundamentalness, and a least square fitting algorithm. This is because they estimate the frequency over a wide range and do this with a high accuracy in the presence of higher harmonics.
The fundamental frequency estimation algorithm, based on a least square fit, is presented in this report. An interpolation, based on linear regression, is added to enhance the accuracy. Furthermore, the algorithm is adapted to frequencies in Hz, instead of relative frequencies. The algorithm was written in Matlab code. For real-time implementation the algorithm was also built in a Simulink environment. Calculation times were found to be 0.4 ± 0.1 ms on a 3.2 GHz Pentium 4 processor in a Matlab environment.
The algorithm was optimized for a reasonable accuracy, and the calculation time is kept low to ensure real-time operation. The sample length on which the frequency estimation is done was found to be 125 samples. The minimum number of points per trough in the error function is 5. The accuracy is then on a reasonable level, for both the systematic and the stochastic error. At 52 Hz the relative error peaks at 2 %, but for higher frequencies the error is well below 1 %. The standard deviation is well below 0.1 Hz for the whole frequency range of 50 to 1000 Hz. The minimum fundamental frequency to be estimated without the troughs interfering is 40 Hz, which is well below the minimum assumed frequency of 50 Hz.
The algorithm was also tested on other signals, and the fundamental frequency was successfully estimated for a sinusoidal, a square and a saw wave signal. When the signal has an offset, the fundamental frequency could not be estimated.
The least square algorithm was compared to a standard continuous wavelet transform algorithm. The stochastic error performance of the least square algorithm was found to be worse than that of the continuous wavelet transform. The continuous wavelet transform, on the other hand, showed a bigger systematic error, especially at low frequencies. This is due to the fact that the wavelet transform uses a full period of the signal at low frequencies, while the least square fitting algorithm uses only 0.625 period. This delay, resulting from a low time resolution at low frequencies, results in an error when the frequency to be estimated is changing.


These results were confirmed by the implementation in the measurement setup. The wavelet transform shows a more robust low-frequency estimation and has fewer problems with an offset error.

8.2 Recommendations

The speed of calculation can be further enhanced by rewriting the algorithm in embedded C-code. The algorithm can then be used more efficiently in, for example, a Simulink environment.
As computers become faster or the algorithm gets implemented more efficiently, the number of points in the error function, e(f), can be further increased. The points can even be spaced non-uniformly. More points in the low frequency range would greatly improve the accuracy in this region, where the relative error is quite high now. The stochastic error would also decrease significantly. That is most welcome, since a 1 Hz stochastic error has far more influence on a 50 Hz sample than it has on a 1000 Hz sample.
If it can be done fast enough, the sample length, N, in the LSF method could be made adaptive. When the sample length changes, some matrices change as well. These need to be recalculated. When computers are fast enough and the program is efficient enough, it should be possible to update these matrices online. Then a good frequency measurement can be found with only 0.6 period for both high and low frequency signals.
When the velocity goes to zero, for example when the encoder turns around, the frequency cannot be estimated anymore by the least square fitting algorithm. At a complete standstill there is actually no frequency content. It should therefore be noted that the position measurement with the analog SinCos encoder output can only be enhanced when the encoder is moving.
The least square fitting algorithm, as it was designed for the specifications, only works well when the frequency is above 50 Hz. This low-frequency malfunction may be improved by adding more points in the error function at low frequencies.
The position at a standstill may be found by extrapolation of the previous positions. An indication of low-frequency malfunction may be that the minimum error is very high. This can be seen in Fig. 5.10. A bound on the error could be investigated to obtain this indication of a too low frequency.
If a real-time frequency estimation algorithm is needed and both a continuous wavelet transform and a least square fitting algorithm are considered, the choice should depend on the kind of signal that has to be analyzed. If the frequency is changing fast, the delay time, which is longer for a wavelet transform at low frequencies, could cause significant errors at low frequencies. These errors will be higher for the wavelet transform. The CWT method shows better low-frequency performance, although the LSF could be optimized for these low-frequency signals.

Bibliography
[1] P.L.M. Heydemann. Determination and correction of quadrature fringe measurement errors in interferometers. Applied Optics, 20(19):3382–3384, October 1981.
[2] Wikipedia, the free encyclopedia. Triangle wave. URL: http://en.wikipedia.org/wiki/Triangle_wave, May 2008.
[3] J.J. Kok and M.J.G. van de Molengraft. Signaal Analyse. Technical report, Eindhoven University of Technology, Department of Mechanical Engineering, 2003.
[4] T. Lobos and J. Rezmer. Real-Time Determination of Power System Frequency. IEEE Transactions on Instrumentation and Measurement, vol. 46, no. 4, pages 877–881, August 1997.
[5] S. Karlsson, J. Yu and M. Akay. Time-Frequency Analysis of Myoelectric Signals During Dynamic Contractions: A Comparative Study. IEEE Transactions on Biomedical Engineering, vol. 47, no. 2, pages 228–238, February 2000.
[6] H. Kawahara, I. Masuda-Katsuse and A. de Cheveigné. Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds. Speech Communication, 27:187–207, 1999.
[7] A. Choi. Real-Time Fundamental Frequency Estimation by Least-Square Fitting. IEEE Transactions on Speech and Audio Processing, vol. 5, no. 2, pages 201–205, March 2000.
[8] S. Kim and Y. Park. Active Control of Multi-Tonal Noise with Reference Generator Based on On-line Frequency Estimation. Journal of Sound and Vibration, pages 647–666, 1999.
[9] R. Merry. Wavelet Theory and Applications: A Literature Study. Eindhoven University of Technology, Department of Mechanical Engineering, DCT nr. 2005.53, June 7, 2005.
[10] A. Routray, A. Kumar Pradhan and K. Prahallad Rao. A Novel Kalman Filter for Frequency Estimation of Distorted Signals in Power Systems. IEEE Transactions on Instrumentation and Measurement, vol. 51, no. 3, pages 469–479, June 2002.
[11] T. Lin, M. Tsuji and E. Yamada. A Wavelet Approach to Real Time Estimation of Power System Frequency. SICE, pages 58–65, July 2001.
[12] A. Cichocki and T. Lobos. Artificial Neural Networks for Real-Time Estimation of Basic Waveforms of Voltages and Currents. IEEE Power Industry Computer Application Conference, pages 357–363, May 1993.
[13] L.L. Lai, C.T. Tse, W.L. Chan and A.T.P. So. Real-Time Frequency and Harmonic Evaluation using Artificial Neural Networks. IEEE Transactions on Power Delivery, vol. 14, no. 1, pages 52–59, January 1999.
[14] B. Kolman and D.R. Hill. Elementary Linear Algebra. Pearson Education, Inc., eighth edition, 2004, pages 276–281. ISBN 0-13-121933-2.
[15] F.L.M. Delbressine, P.H.J. Schellekens, H. Haitjema and F.G.A. Homburg. Metrologie voor W. Technical report, Eindhoven University of Technology, Department of Mechanical Engineering, 2006.

Appendix A

Solution for a and b


The linear system to be solved is

\[
aP + bQ + W = 0 \tag{A.1}
\]
\[
aQ + bR + X = 0 \tag{A.2}
\]

Solve for a

Equation (A.1) is restated as (A.3), and (A.2) is multiplied by Q/R to give (A.4):

\[
aP + bQ + W = 0 \tag{A.3}
\]
\[
\frac{aQ^2}{R} + bQ + \frac{XQ}{R} = 0 \tag{A.4}
\]

Subtracting (A.4) from (A.3) eliminates b:

\[
aP + W - \frac{aQ^2}{R} - \frac{XQ}{R} = 0 \tag{A.5}
\]
\[
a \left( P - \frac{Q^2}{R} \right) = \frac{XQ}{R} - W \tag{A.6}
\]
\[
a = \frac{XQ - WR}{PR - Q^2} \tag{A.7}
\]

Solve for b

Equation (A.1) is multiplied by Q/P to give (A.8), and (A.2) is restated as (A.9):

\[
aQ + \frac{bQ^2}{P} + \frac{WQ}{P} = 0 \tag{A.8}
\]
\[
aQ + bR + X = 0 \tag{A.9}
\]

Subtracting (A.8) from (A.9) eliminates a:

\[
bR + X - \frac{bQ^2}{P} - \frac{WQ}{P} = 0 \tag{A.10}
\]
\[
b \left( R - \frac{Q^2}{P} \right) = \frac{WQ}{P} - X \tag{A.11}
\]
\[
b = \frac{WQ - XP}{PR - Q^2} \tag{A.12}
\]
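As a quick numerical sanity check (a standalone Python sketch, not part of the thesis; the values of P, Q, R, W and X below are arbitrary), substituting the closed-form solutions (A.7) and (A.12) back into (A.1) and (A.2) gives residuals of zero:

```python
# Arbitrary test values with a nonzero denominator P*R - Q^2.
P, Q, R = 3.0, 1.5, 2.0
W, X = -0.7, 0.4

D = P * R - Q * Q        # common denominator of (A.7) and (A.12)
a = (X * Q - W * R) / D  # equation (A.7)
b = (W * Q - X * P) / D  # equation (A.12)

# Residuals of the original system (A.1) and (A.2) vanish to machine precision.
print(a * P + b * Q + W)  # ~0
print(a * Q + b * R + X)  # ~0
```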

Appendix B

The width of a trough


The Fourier series of an arbitrary signal is

\[
x(t) = \sum_{k=0}^{\infty} \left[ a_k \cos(k \omega_0 t) + b_k \sin(k \omega_0 t) \right] \tag{B.1}
\]

with \(\omega_0\) the minimal measurable frequency in radians per second, provided that the measurement contains exactly one period or an integer number of full periods of this frequency component:

\[
\omega_0 = \frac{2\pi}{T_0} \tag{B.2}
\]

From this the fundamental frequency in Hz can be calculated by (B.3), which is also equal to the sampling frequency divided by the sample length:

\[
f_0 = \frac{1}{T_0} = \frac{f_s}{N} \tag{B.3}
\]

So the spectrum (B.1) is built from the frequencies of the infinite series

\[
f_0,\ 2 f_0,\ 3 f_0,\ \ldots \tag{B.4}
\]

The error function can only go to zero for a frequency in this series; after all, only these frequencies are present in the signal. These frequencies are spaced a distance \(f_0\) apart, which means that a trough can only be formed between \(f \pm f_0\). The relative frequency of this spacing is

\[
f_{0,\mathrm{rel}} = \frac{f_0}{f_s} = \frac{1}{N} \tag{B.5}
\]

or in radial frequency

\[
\omega_{0,\mathrm{rel}} = \frac{2\pi}{N} \tag{B.6}
\]

The maximum width of the trough is therefore \(2 f_0\), or \(2/N\) in relative frequency.

Suppose a sinusoidal signal with only one frequency in its spectrum; then the fitting error is low around this frequency and high for all other frequencies. The error decreases closer to the frequency of the signal. The next multiple of f0 cannot give a response, because the signal there is entirely different. This is shown in Fig. B.1, which shows the error for a 250 Hz signal, calculated with a sample length of N = 125 and a sampling frequency fs = 10 kHz. That makes f0 = 80 Hz. The trough extends from approximately 170 Hz to 330 Hz, a width of 160 Hz.

Figure B.1: error for a 250 Hz signal
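The trough behaviour can be checked numerically (a standalone Python/NumPy sketch, not part of the thesis; the fit is done here with numpy.linalg.lstsq instead of the closed-form a and b of Appendix A):

```python
import numpy as np

# Least-squares fitting error versus candidate frequency for a pure sinusoid,
# using the example values from the text: fx = 250 Hz, fs = 10 kHz, N = 125,
# so f0 = fs/N = 80 Hz and the trough is at most 2*f0 = 160 Hz wide.
fx, fs, N = 250.0, 10e3, 125
n = np.arange(1, N + 1)
x = np.sin(2 * np.pi * fx * n / fs)

def lsf_error(f):
    """Residual of fitting a*sin + b*cos of frequency f (Hz) to x."""
    basis = np.column_stack((np.sin(2 * np.pi * f * n / fs),
                             np.cos(2 * np.pi * f * n / fs)))
    ab, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return float(np.sum((basis @ ab - x) ** 2))

print(lsf_error(250.0))  # essentially zero at the true frequency
print(lsf_error(170.0))  # large again at fx - f0, one edge of the trough
print(lsf_error(330.0))  # large again at fx + f0, the other edge
```

The error at the trough edges is close to the full signal energy, consistent with the 170-330 Hz trough in Fig. B.1.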

Appendix C

The frequency estimation program


% Least square algorithm for estimating the fundamental frequency
%
% author: P.J.H. Maas
% date: 16th of April 2008
% version: 1.0
% version history
%   version 0.1: working with for-loops, plotting w-e
%   version 0.2: working with frequencies instead of relative frequencies
%   version 0.3: matrix calculations instead of for-loops for speed
%   version 0.4: noise added
%   version 0.5: parabolic interpolation
%   version 0.6: range and sampling frequency specified
%   version 0.7: replaced repmat functions by matrix calculations
%   version 0.8: making nppt an accuracy variable
%   version 1.0: final

% clean up
clc
close all
clear all

% generate sawtooth signal with frequency fx and sampling frequency fs
fx   = 250;      % sawtooth frequency
fs   = 10*10^3;  % sampling frequency
fmax = fs/10;    % max frequency to be estimated
nppt = 5;        % number of samples per flank of the trough
N    = 125;      % number of used samples (1 period: fs/fx)

% introducing a test signal
t = 0:1/fs:1;
% x includes a little bit of noise, as the real signals aren't that noisy
x = sawtooth(2*pi*fx.*t+1/2*4/7*pi,1/2) + wgn(length(t),1,-40)';

% algorithm parameters
% sample numbers (time)
n = (1:N)';
% range made (factor 2 when N is adaptive, a small factor >1 for other N)
m  = round(1.2*fmax/fs/(1/(nppt*N)));
mi = 1:m;
f  = mi/(nppt*N); % relative frequency (f/fs) vector
M  = length(f);
% making repmat matrices
oneN = ones(N,1);
oneM = ones(1,M);
% making vectors offline, preallocating memory
W = zeros(1,M);
X = zeros(1,M);
A = zeros(N,M);
B = zeros(N,M);
xest = zeros(N,M);
xM   = zeros(N,M);
e    = zeros(1,M);

% frequency estimation
% make sin and cos matrices for all omega, used as database matrices (N x M)
s = sin(2*pi*n*f);
c = cos(2*pi*n*f);
% make P, Q, R as specified in the algorithm, (1 x M)
P = sum(s.*s,1);
Q = sum(c.*s,1);
R = sum(c.*c,1);
D = P.*R-Q.*Q;

% processor time in (stopwatch start); after this, functions of x are
% calculated
tic;
% make W and X vectors as specified in the algorithm, (1 x M)
W = -x(1:N)*s;
X = -x(1:N)*c;
% make a and b vectors as specified in the algorithm, (1 x M)
a = (Q.*X-R.*W)./(P.*R-Q.*Q);
b = (Q.*W-R.*X)./(P.*R-Q.*Q);
% transform a and b into matrices to use matrix calculations
A = oneN*a;
B = oneN*b;
% calculate error vector e, (1 x M) (as a function of f)
xest = A.*s + B.*c;
xM   = x(1:N)'*oneM;
e    = sum( (xest-xM).^2 ,1 );

% parabolic interpolation
i = find(e==min(e));
% 3 points for interpolation
e_para = [e(i-1);e(i);e(i+1)];
f_para = [f(i-1);f(i);f(i+1)];
% parabolic linear regression
A_para = [f_para.^2,f_para.^1,f_para.^0];
y = A_para\e_para;
f_est_para = -y(2)/2/y(1);
% estimated frequency
fx_est_para = f_est_para*fs;

% output
% processor time out
% the displayed time is not representative because it is only one measurement
tt = toc;
figure
plot(t,x)
title('signal: sawtooth with noise')
xlabel('time (s)')
xlim([0,10*1/fx])
figure
semilogx(f*fs,e,'-x')
title('frequency plot')
xlabel('frequency (Hz)')
ylabel('error')
xlim([fmax/25,fmax*1.2])
hold on
f_para_plot = f(i-1):.1/fs:f(i+1);
e_para = polyval(y,f_para_plot);
semilogx(f_para_plot*fs,e_para,'r')
legend('calculated error','polynomial fit')
% number of periods in the sample
p = N/fs*fx
% number of samples
N
% estimated frequency
fx_est_para
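For readers without MATLAB, the same estimator can be sketched in Python/NumPy (not part of the thesis; the triangular sawtooth and white-Gaussian-noise calls are replaced by a modulo ramp and NumPy's random generator, and variable names mirror the MATLAB script above):

```python
import numpy as np

# Settings mirroring the MATLAB script: fx = 250 Hz, fs = 10 kHz, N = 125.
fx, fs = 250.0, 10e3
fmax = fs / 10
nppt = 5                      # samples per flank of the trough
N = 125                       # number of used samples

t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
# ramp sawtooth plus a little noise (stand-in for sawtooth(...) + wgn(...))
x = 2 * ((fx * t) % 1.0) - 1 + 1e-2 * rng.standard_normal(t.size)

n = np.arange(1, N + 1).reshape(-1, 1)        # (N, 1) sample numbers
m = round(1.2 * fmax / fs * nppt * N)         # number of candidate frequencies
f = np.arange(1, m + 1) / (nppt * N)          # relative frequencies f/fs, (M,)

s = np.sin(2 * np.pi * n * f)                 # (N, M) sine database
c = np.cos(2 * np.pi * n * f)                 # (N, M) cosine database
P, Q, R = (s * s).sum(0), (c * s).sum(0), (c * c).sum(0)

xs = x[:N]
W, X = -xs @ s, -xs @ c
D = P * R - Q * Q
a = (Q * X - R * W) / D                       # equation (A.7)
b = (Q * W - R * X) / D                       # equation (A.12)
e = ((a * s + b * c - xs[:, None]) ** 2).sum(0)

# parabolic interpolation through the three points around the minimum
i = int(np.argmin(e))
y = np.polyfit(f[i - 1:i + 2], e[i - 1:i + 2], 2)
fx_est = -y[1] / (2 * y[0]) * fs
print(fx_est)  # close to 250 Hz
```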

Appendix D

The Simulink block scheme

[Block diagram. A tapped delay (125 delays) buffers the input signal; sine and cosine database matrices (Sinus, Cosinus), together with the precomputed Q, R and D matrices, feed matrix multiplications that calculate W, X, a and b; repmat blocks (oneN, oneM) expand a, b and x to matrices, from which xest and the squared error e are calculated; a rate transition and the parabolic-interpolation subsystem then produce the estimated frequency fx_est.]
Appendix E

The interpolation subsystem

[Block diagram. The index of the minimum of the error vector e is determined; selectors pick the error and frequency values at indices i-1, i and i+1; an SVD-based least-squares solver (AX = B) fits the parabola through these three points; the vertex -y(2)/(2 y(1)), multiplied by the gain fs, gives the output fx_est.]

Appendix F

The frequency estimation program at 1 kHz sampling frequency

Only the settings part is shown; the rest of the algorithm is the same.
% Least square algorithm for estimating the fundamental frequency
%
% author: P.J.H. Maas
% date: 16th of April 2008
% version: 1.0
% version history
%   version 0.1: working with for-loops, plotting w-e
%   version 0.2: working with frequencies instead of relative frequencies
%   version 0.3: matrix calculations instead of for-loops for speed
%   version 0.4: noise added
%   version 0.5: parabolic interpolation
%   version 0.6: range and sampling frequency specified
%   version 0.7: replaced repmat functions by matrix calculations
%   version 0.8: making nppt an accuracy variable
%   version 1.0: final

% clean up
clc
close all
clear all

% generate sawtooth signal with frequency fx and sampling frequency fs
fx   = 100;     % sawtooth frequency
fs   = 1*10^3;  % sampling frequency
fmax = fs/10;   % max frequency to be estimated
nppt = 9;       % number of samples per flank of the trough
N    = 80;      % number of used samples (1 period: fs/fx)

Appendix G

Simulation results
The following tables show the results of the simulations with a minimum of five points per trough and a parabolic interpolation over three points. The empty cells in the tables represent an error: no interpolation point was available. This happens at low frequencies in combination with a short sample length.
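The failure behind the empty cells can be illustrated schematically (a Python sketch, not thesis code; `has_interpolation_point` is a hypothetical helper): the three-point parabola needs a neighbour on each side of the error minimum, which does not exist when the minimum falls on the edge of the candidate-frequency grid.

```python
def has_interpolation_point(e):
    """True if e[i-1], e[i] and e[i+1] exist around the minimum of e,
    i.e. a three-point parabola can be fitted."""
    i = e.index(min(e))
    return 0 < i < len(e) - 1

print(has_interpolation_point([5.0, 1.0, 2.0, 4.0]))  # True: interior minimum
print(has_interpolation_point([1.0, 2.0, 4.0, 5.0]))  # False: minimum at the grid edge
```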

                                     Input frequency (Hz)
   N       50      75     100     150     200     300     400     500     750    1000
 400    49.83   74.88   99.91  149.94  199.96  299.97  399.98  499.99  749.99  999.99
 200    49.49   75.35   99.68  149.77  199.82  299.88  399.91  499.94  749.95  999.95
 150    51.67   75.21  100.44  149.96  199.71  300.17  399.85  500.11  749.87  999.93
 100    52.00   77.46   99.01  150.64  199.37  299.54  399.65  499.73  750.16  999.84
  75            75.89  103.19  150.43  200.85  299.85  399.40  500.50  749.47 1000.07
  50                   103.67  154.59  198.00  301.23  398.74  500.74  750.64  999.37
  40                           149.82  202.87  299.97  398.27  498.50  748.78  999.18
  30                                   199.29  302.00  399.67  502.01  748.93  998.00
  20                                           297.92  405.46  495.45  752.24  995.66
  10                                                           511.30  763.03

Table G.1: mean estimated frequency (Hz)

                                     Input frequency (Hz)
   N       50      75     100     150     200     300     400     500     750    1000
 400    0.011   0.010   0.011   0.011   0.009   0.012   0.012   0.010   0.008   0.014
 200    0.017   0.027   0.039   0.027   0.026   0.045   0.029   0.033   0.024   0.034
 150    0.049   0.052   0.041   0.062   0.064   0.050   0.049   0.061   0.047   0.065
 100    0.075   0.083   0.080   0.081   0.098   0.079   0.094   0.080   0.102   0.069
  75            0.088   0.152   0.153   0.156   0.119   0.147   0.135   0.179   0.160
  50                    0.134   0.175   0.317   0.223   0.269   0.247   0.195   0.253
  40                            0.261   0.356   0.303   0.335   0.269   0.408   0.275
  30                                    0.383   0.482   0.547   0.520   0.559   0.532
  20                                            0.870   1.026   1.001   0.861   0.879
  10                                                            1.969   3.786

Table G.2: standard deviation of the estimated frequency (Hz)


                                     Input frequency (Hz)
   N       50      75     100     150     200     300     400     500     750    1000
 400    0.169   0.117   0.091   0.062   0.040   0.030   0.021   0.015   0.010   0.006
 200    0.508   0.349   0.324   0.233   0.175   0.118   0.088   0.061   0.054   0.051
 150    1.670   0.214   0.439   0.036   0.287   0.172   0.153   0.106   0.129   0.069
 100    2.005   2.463   0.992   0.642   0.629   0.464   0.347   0.272   0.155   0.165
  75            0.888   3.192   0.434   0.851   0.150   0.599   0.495   0.532   0.071
  50                    3.666   4.589   1.999   1.233   1.263   0.737   0.637   0.627
  40                            0.184   2.874   0.025   1.728   1.503   1.216   0.824
  30                                    0.706   1.997   0.331   2.005   1.073   2.002
  20                                            2.081   5.458   4.552   2.244   4.335
  10                                                           11.299  13.031

Table G.3: absolute error (Hz)

                                     Input frequency (Hz)
   N       50      75     100     150     200     300     400     500     750    1000
 400   0.0034  0.0016  0.0009  0.0004  0.0002  0.0001  0.0001  0.0000  0.0000  0.0000
 200   0.0102  0.0047  0.0032  0.0016  0.0009  0.0004  0.0002  0.0001  0.0001  0.0001
 150   0.0334  0.0029  0.0044  0.0002  0.0014  0.0006  0.0004  0.0002  0.0002  0.0001
 100   0.0401  0.0328  0.0099  0.0043  0.0031  0.0015  0.0009  0.0005  0.0002  0.0002
  75           0.0118  0.0319  0.0029  0.0043  0.0005  0.0015  0.0010  0.0007  0.0001
  50                   0.0367  0.0306  0.0100  0.0041  0.0032  0.0015  0.0008  0.0006
  40                           0.0012  0.0144  0.0001  0.0043  0.0030  0.0016  0.0008
  30                                   0.0035  0.0067  0.0008  0.0040  0.0014  0.0020
  20                                           0.0069  0.0136  0.0091  0.0030  0.0043
  10                                                           0.0226  0.0174

Table G.4: relative error

                                     Input frequency (Hz)
   N       50      75     100     150     200     300     400     500     750    1000
 400        2       3       4       6       8      12      16      20      30      40
 200        1     1.5       2       3       4       6       8      10      15      20
 150     0.75   1.125     1.5    2.25       3     4.5       6     7.5   11.25      15
 100      0.5    0.75       1     1.5       2       3       4       5     7.5      10
  75           0.5625    0.75   1.125     1.5    2.25       3    3.75   5.625     7.5
  50                      0.5    0.75       1     1.5       2     2.5    3.75       5
  40                              0.6     0.8     1.2     1.6       2       3       4
  30                                      0.6     0.9     1.2     1.5    2.25       3
  20                                              0.6     0.8       1     1.5       2
  10                                                              0.5    0.75

Table G.5: number of periods in sample
