
DYNAMIC LOAD MODELING BASED ON A NONPARAMETRIC ANN

A.P. Alves da Silva C. Ferreira G. Lambert Torres


Escola Federal de Engenharia de Itajubá
Instituto de Engenharia Elétrica
BRAZIL

Abstract: Accurate dynamic load models allow more precise calculations of power system controls and stability limits. System identification methods can be applied to estimate load models based on measurements. Parametric and nonparametric (functional) are the two main classes of system identification methods. The parametric approach has been the only one used for load modeling so far. In this paper, the performance of a functional load model based on a polynomial artificial neural network is compared with a linear model and with the popular "ZIP" model. The impact of clustering different load compositions is also investigated. Substation buses (138 kV) from the Brazilian system feeding important industrial consumers have been modeled.

Keywords: Load Modeling, Neural Networks, Stability Studies.

1. INTRODUCTION

Independent of the study to be performed, the literature has shown the fundamental importance of power system component modeling. Therefore, accurate models for transmission lines, transformers, generators, regulators and compensators have been proposed. However, the same has not happened to load models. Although the importance of load modeling is well known, especially for transient and dynamic stability studies, the random nature of a load composition makes its representation very difficult.

Two approaches have been used for load modeling. In the first one, based on the knowledge of the individual load components, the load model is obtained through the aggregation of the individual load component models [1]. The second approach does not require the knowledge of the load physical characteristics. Based on measurements related to the load responses to disturbances, the model is estimated using system identification methods [2]. The composition approach has the disadvantage of requiring information that is not generally available. This approach does not seem to be appropriate for large systems, since the determination of an average (and precise) composition for each load bus of interest is virtually impossible. The second approach does not suffer from this drawback, since the load to be modelled can be assumed to be a "black box". However, a significant amount of data related to programmed tests and natural disturbances affecting the system needs to be collected.

Considering the shortcomings of the two approaches, and the fact that data acquisition (and processing) systems are becoming very cheap, it seems that the system identification approach is more in accordance with current technology. This approach allows real-time load monitoring and modeling. Parametric [3] and nonparametric (functional) [4] are the two main classes of system identification methods. The parametric methods assume a known model structure with unknown parameters. Methods in this class have been the only ones used for load modeling so far. Their performance depends on a good guess of the model order, which generally requires previous knowledge of the load characteristics.

In recent years, artificial neural networks (ANNs) have been used for dynamic load modeling due to features such as nonlinear mapping and generalization capability [5,6,7]. The multilayer perceptron trained with error back-propagation has been employed. However, it has several shortcomings, such as the difficult setting of learning parameters, slow convergence, training failures due to local minima, and a pre-specified architecture (parametric model).

In this paper, the performance of a functional load model based on a polynomial ANN [8] is compared with a linear model [2] and with the "ZIP" (constant impedance, current and power) model [1]. The impact of clustering different load compositions is also investigated.

2. LOAD MODELS

The main idea is to obtain a model that represents the variation of electrical system loads, taking into

0-7803-3115-X/96 $5.00 © 1996 IEEE.


consideration the variations of voltage and frequency.

2.1 ZIP Model

The ZIP model is a classic static load model. It is usually not an accurate model for the majority of power system loads. However, many programs employed by the electric utilities use this type of representation.

The ZIP model can be expressed by Equation (1), and is also known as the constant impedance, constant current and constant power model:

P(V) = P0 (a + bV + cV²)
Q(V) = Q0 (d + eV + fV²)      (1)

where:
- coefficients a and d specify the per unit change in real and reactive load that behaves as constant power,
- coefficients b and e specify the per unit change in real and reactive load that behaves as constant current, and
- coefficients c and f specify the per unit change in real and reactive load that behaves as constant impedance.

2.2 Linear Model [2]

The method allows the selection of the load model order. The general form of this load model is described in Equation (2):

ΔP(k) = Σ_{i=1}^{nP} φi ΔP(k−i) + Σ_{i=0}^{nV} θi ΔV(k−i) + Σ_{i=0}^{nf} ωi Δf(k−i)
ΔQ(k) = Σ_{i=1}^{nQ} φi ΔQ(k−i) + Σ_{i=0}^{nV} θi ΔV(k−i) + Σ_{i=0}^{nf} ωi Δf(k−i)      (2)

where ΔP(k) = P(k+1) − P(k), and likewise for ΔQ, ΔV and Δf.

The parameters φi, θi, ..., and ωi are calculated using least squares estimation, while the pre-selected order values (nP, nV, ..., nf) define the structure of the model. The order values are chosen according to the kind of load.

2.3 Functional Load Model Based on Polynomial ANN

The architecture of a polynomial network is formed during the training process (nonparametric model). The neuron activation function is based on elementary polynomials of arbitrary order. A polynomial network is shown in Figure 1. In this example, the network has seven inputs, although the network uses only five of them. This is due to the automatic input selection capability of the training algorithm. Automatic input selection is very useful when the load structure is unknown. Therefore, there is no risk of using an ANN more complex than necessary, avoiding overfitting the data and consequently losing generalization capability.

Fig. 1: Polynomial Network.

Each neuron output can be expressed by a second-order polynomial function as shown in Figure 2, where xi and xj (i.e., ΔP(k−r), ΔV(k−m), Δf(k−n)) are inputs, A, B, C, D, E, and F are the polynomial coefficients, which are equivalent to the network weights, and y (ΔP(k) or ΔQ(k)) is the neuron output.

Fig. 2: Neuron Model of a Polynomial Network, where y = A + B xi + C xj + D xi² + E xj² + F xi xj.

The Group Method of Data Handling (GMDH) [8] training algorithm can be used to adjust the polynomial coefficients and to find the network architecture.

The GMDH algorithm employs two data sets: one for estimating the network weights (training) and the other for testing which neurons should survive during the training process. Methods based on stochastic search, such as genetic algorithms and simulated annealing, have been applied for training polynomial networks [4,9]. Besides the high computational effort, these variations of the basic GMDH algorithm include a network complexity measure in the error function to be minimized. This allows the elimination of the test set, which can be useful for problems with few available samples. However, these criteria for automatic selection of the network architecture do not guarantee optimality (maximum generalization capability).

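Since the linear model of Equation (2) is linear in its parameters, the coefficients can be obtained by ordinary least squares from records of ΔP, ΔV and Δf, exactly as the text describes. The sketch below is illustrative only (the function name is an assumption, and the default orders nP = nV = nf = 1 are chosen for brevity), not the implementation used in the paper.

```python
import numpy as np

def fit_linear_load(dp, dv, df, n_p=1, n_v=1, n_f=1):
    """Least-squares fit of the active-power part of Equation (2):
    dP(k) = sum_{i=1..n_p} phi_i dP(k-i)
          + sum_{i=0..n_v} theta_i dV(k-i)
          + sum_{i=0..n_f} omega_i df(k-i)
    Returns the coefficients [phi_1.., theta_0.., omega_0..]."""
    start = max(n_p, n_v, n_f)
    rows, targets = [], []
    for k in range(start, len(dp)):
        row = [dp[k - i] for i in range(1, n_p + 1)]      # phi terms
        row += [dv[k - i] for i in range(0, n_v + 1)]     # theta terms
        row += [df[k - i] for i in range(0, n_f + 1)]     # omega terms
        rows.append(row)
        targets.append(dp[k])
    coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets),
                               rcond=None)
    return coef
```

The reactive-power equation is fitted the same way with ΔQ in place of ΔP; the pre-selected orders fix the regressor structure, which is exactly the parametric-model limitation the text points out.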
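Likewise, the second-order neuron of Figure 2 is linear in its six weights A–F, so each candidate neuron can be fitted to the training set by least squares, which is the building block the GMDH training relies on. A minimal sketch with illustrative function names and synthetic data:

```python
import numpy as np

def fit_neuron(xi, xj, y):
    """Fit y = A + B*xi + C*xj + D*xi^2 + E*xj^2 + F*xi*xj
    (the neuron model of Figure 2) in the least-squares sense."""
    design = np.column_stack(
        [np.ones_like(xi), xi, xj, xi ** 2, xj ** 2, xi * xj])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef  # [A, B, C, D, E, F]

def neuron_output(coef, xi, xj):
    """Evaluate a fitted neuron on new inputs."""
    a, b, c, d, e, f = coef
    return a + b * xi + c * xj + d * xi ** 2 + e * xj ** 2 + f * xi * xj
```

Because the fit is a linear least-squares problem, there are no learning parameters and no local minima, which is consistent with the advantages of the GMDH-trained network listed by the authors.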
The GMDH allows the construction of layers, one at a time, beginning at the input layer. Two different data sets, totaling N input/output patterns, are used for training (Nt samples) and testing (N − Nt samples). The GMDH algorithm is described below (a second-order elementary polynomial is assumed):

Step I) Combine the m inputs (to the current layer), two by two, in order to form the following linear systems of equations:

y1  = A + B x1i + C x1j + D x1i² + E x1j² + F x1i x1j
y2  = A + B x2i + C x2j + D x2i² + E x2j² + F x2i x2j
...
yNt = A + B xNt,i + C xNt,j + D xNt,i² + E xNt,j² + F xNt,i xNt,j      (3)

Step II) Estimate the coefficients (weights) A, B, C, D, E, and F by solving the system of equations above for the training set (in the sense of least squares). Then, the neuron output is represented by the following equation:

y = A + B xi + C xj + D xi² + E xj² + F xi xj      (4)

Step III) Evaluate each of the neurons created in Step II using the test set:

εk² = Σ_{l=Nt+1}^{N} (yl − zlk)² ,  k = 1, 2, ..., m(m−1)/2      (5)

where yl and zlk represent the desired output values and the values obtained from each of the m(m−1)/2 neurons, respectively.

Step IV) Sort the εk² values. Compare the minimum value with the minimum ε² of the previous layer. If the current error is greater than the previous error, stop (end of the algorithm). Otherwise, save the fittest neurons and go back to Step I.

The advantages of the polynomial network trained with the Group Method of Data Handling are:
- the ANN architecture and its inputs are automatically defined by the training process;
- there is no learning parameter to be set;
- no local minima problem; and
- fast convergence.

3. LOAD CLUSTERING

Electric loads have different compositions through time. Therefore, in order to improve the performance of the load model it is advisable to estimate more than one model for each load bus. The idea is to separate input/output signals showing similar behavior, probably associated with the same load composition, in order to decrease the extent of nonlinearity that should be provided by a single load model. An ART [10]-like neural network model is used for the clustering task. It has been employed because there is no need for pre-specifying the number of groups. A model for each group is created after dividing the set of disturbances related to a bus of interest into groups.

The voltage and/or frequency disturbances and respective load variations can be represented by points in a multi-dimensional space. For instance:

x = ( P(k), P(k−1), P(k−2), ..., f(k), f(k−1), f(k−2), ..., V(k), V(k−1), V(k−2), ... )

Procedures to find groups of similar points are known as unsupervised learning mechanisms. Such procedures try to identify prototypes or exemplars that may serve as the best representations for each group. A prototype can be one of the defined points for the measured disturbances or the geometric center of the group.

The metric used in this work is the Euclidean distance. The algorithm is described step by step below:

I - At the beginning of the training process the units in the output layer are not activated.
    a) choose a vigilance parameter ρ.
    b) choose the first entering pattern as the first group prototype.

II - Perform iterations until none of the training examples cause any change in the set of prototype vectors.
    a) choose a training sample xk in cyclic order.
    b) find the prototype Pi closest (at distance d) to the training sample xk.
    c) test if Pi is similar enough to sample xk.
    d) if d < ρ then:
       - sample xk belongs to the group represented by prototype Pi;
       - change Pi to get closer to sample xk;
       - go back to step II to choose a new sample.
       otherwise:
       - xk does not belong to any existent group. Create a new group and set the group prototype equal to xk.

4. TEST RESULTS

Substation buses (138 kV) from the Brazilian system feeding important industrial consumers (steel-making, mining, ferroalloy and chemical) have been modeled. The data acquisition sampling period is

10 ms. Some tests for a 138 kV bus are presented in Figures 3 to 6. Twenty-nine measured disturbances are used.

The disturbances are clustered as follows:

- Active Power (2 groups)
  1) 18 disturbances (15 for training, 3 for testing)
  2) 11 disturbances (9 for training, 2 for testing)

- Reactive Power (2 groups)
  1) 12 disturbances (8 for training, 4 for testing)
  2) 17 disturbances (15 for training, 2 for testing)

Mean Absolute Errors: 2.47% (ZIP), 0.58% (Linear), 0.41% (ANN)
Fig. 3a: Test Disturbance from the 1st Group (P).

Fig. 3b: Test Disturbance from the 1st Group (V, f).

The clustering procedure determines when the load model is supposed to be accurate (based on the similarity between the test and training disturbances). As the number of disturbances available for each bus is usually limited, cross-validation or bootstrapping techniques should be applied to estimate the load model error rate.

Although sometimes the errors for the linear and ANN load models are similar, the selection of an appropriate order for the linear system is not easy.

The ANN training time for the first group of active power is five minutes on a 486 DX2-66 MHz.

Mean Absolute Errors: 1.18% (ZIP), 0.42% (Linear), 0.29% (ANN)
Fig. 4a: Test Disturbance from the 1st Group (P).

Fig. 4b: Test Disturbance from the 1st Group (V, f).

5. CONCLUSIONS

The main conclusions of this work are:

- The complex load composition makes its dynamic behavior difficult to be modeled.
- Clustering different load compositions improves the models' accuracy. The ANN performance is less sensitive to the number of load groups.
- The ANN load model has shown the best performance by a significant margin.
- The linear load model has been superior to the ZIP model for all test disturbances regarding active power. However, in some tests, the ZIP model has been better than the linear model for reactive power. This is due to the quadratic voltage term of the ZIP model.
- The ZIP model has not been capable of modeling the load dynamics appropriately.

- It has been noticed that the order of the elementary polynomial affects the ANN performance. Future work will focus on solving this problem.

Mean Absolute Errors: 10.13% (ZIP), 8.79% (Linear), 6.53% (ANN)
Fig. 5a: Test Disturbance from the 1st Group (Q).

Fig. 5b: Test Disturbance from the 1st Group (V, f).

Mean Absolute Errors: 6.05% (ZIP), 6.51% (Linear), 4.51% (ANN)
Fig. 6a: Test Disturbance from the 2nd Group (Q).

Fig. 6b: Test Disturbance from the 2nd Group (V, f).

6. ACKNOWLEDGMENTS

The authors would like to thank the Brazilian agencies FAPEMIG and CNPq for their financial support.

7. REFERENCES

[1] IEEE Task Force on Load Representation for Dynamic Performance: "Bibliography on Load Models for Power Flow and Dynamic Performance Simulation", IEEE Trans. Power Syst., Vol. 10, 1995, pp. 523-538.

[2] T. Dovan, T.S. Dillon, C.S. Berger, and K.E. Forward: "A Microcomputer Based On-line Identification Approach to Power System Dynamic Load Modelling", IEEE Trans. Power Syst., Vol. PWRS-2, 1987, pp. 529-536.

[3] A.P. Alves da Silva and V.H. Quintana: "Pattern Analysis in Power System State Estimation", Elect. Power & Energy Syst., Vol. 17, 1995, pp. 51-60.

[4] M.F. Tenorio and W.-T. Lee: "Self-Organizing Network for Optimum Supervised Learning", IEEE Trans. Neural Nets., Vol. 1, 1990, pp. 100-110.

[5] B.-Y. Ku, R.J. Thomas, C.-Y. Chiou, and C.-J. Lin: "Power System Dynamic Load Modeling Using Artificial Neural Networks", IEEE Trans. Power Syst., Vol. 9, 1994, pp. 1868-1874.

[6] T.T. Nguyen and H.X. Buy: "Neural Network Dynamic Load Model", ESAP, Melbourne, Jan. 1993, pp. 467-472.

[7] H. Ren-mu and A.J. Germond: "Comparison of Dynamic Load Modeling Using Neural Network and Traditional Method", 2nd ANNPS, Yokohama, April 1993, pp. 253-258.

[8] S.J. Farlow (Ed.): Self-Organizing Methods in Modeling, Marcel Dekker, 1984.

[9] H. Kargupta and R.E. Smith: "System Identification with Evolving Polynomial Networks", Intern. Conf. Genetic Algorithms, San Diego, July 1991, pp. 370-376.

[10] G. Carpenter and S. Grossberg: "ART 2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns", Applied Optics, Vol. 26, 1987, pp. 4919-4930.
