
ABSTRACT

Self-commutating multilevel current reinjection is a potential alternative to conventional


HVDC thyristor technology. An important drawback of the multilevel configurations is the
interdependence of the reactive power injections at the two ends of the link. This paper describes a
new concept applicable to large power converters consisting of two series-connected twelve-pulse
groups. It is based on a controllable shift between the firings of the two twelve-pulse groups in
opposite directions, which provides independent reactive power control at the sending and
receiving ends. The theory is verified by EMTDC simulation.

INTRODUCTION
Owing to its structural simplicity and four-quadrant power controllability, pulse width
modulation (PWM) conversion has so far been the preferred option for self-commutating
medium power HVDC transmission. However, this technology is less suited to large power
ratings and long distances, due to higher switching losses and to the rating limitations of its main
components (namely the power transistor switch and the underground cable). Thus the interchange
of large quantities of power between separate power systems and the transmission of power from
remote generating stations are still based on the principle of line-commutated current source
conversion.
Multilevel VSC configurations have been presented as possible alternatives to PWM-VSC transmission, but their structural complexity has been the main obstacle to their
commercial implementation. A recent proposal, the multilevel current reinjection (MLCR)
concept, simplifies the converter structure and permits the continued use of conventional
thyristors for the main converter bridges. The main advantage of self-commutation over natural commutation
in HVDC transmission is the ability to control independently the reactive power at each end of
the link, a property that cannot be achieved by MLCR-based (or any other multilevel)
configuration when using only one double-bridge converter group.
However, interconnections of large power ratings will normally use two or more 12-pulse
converter groups and these can be controlled independently from each other without affecting the
output voltage waveform. This fact constitutes the basis of the new control scheme proposed
here. When the operating condition at one end of the link alters the reactive power balance at this
end, the firings of the two groups at the other end are shifted with respect to each other in
opposite directions to keep the power factor constant. The new control concept gives the MLCR
configuration the flexibility until now only available to PWM-VSC transmission.

HVDC
Over long distances, bulk power transfer can be carried out more cheaply by a high voltage direct
current (HVDC) connection than by a long distance AC transmission line. HVDC
transmission can also be used where an AC transmission scheme could not (e.g. through very
long cables or across borders where the two AC systems are not synchronized or not operating at
the same frequency). However, these long distance transmission links require power
converter equipment, which is a possible point of failure, and any interruption
in delivered power can be costly. It is therefore of critical importance to design an HVDC
scheme for a given availability.
HVDC technology is a high-power electronics technology used in electric power
systems. It is an efficient and flexible method to transmit large amounts of electric power over
long distances by overhead transmission lines or underground/submarine cables. It can also be
used to interconnect asynchronous power systems.
The fundamental process that occurs in an HVDC system is the conversion of electrical
current from AC to DC (rectifier) at the transmitting end and from DC to AC (inverter) at the
receiving end.
There are three ways of achieving conversion:
1. Natural commutated converters (NCC)
2. Capacitor commutated converters (CCC)
3. Forced commutated converters

Natural commutated converters (NCC):


NCCs are the most widely used converters in HVDC systems today. The component that enables this
conversion process is the thyristor, which is a controllable semiconductor that can carry very high
currents (4000 A) and is able to block very high voltages (up to 10 kV). By connecting
thyristors in series it is possible to build up a thyristor valve, which is able to operate at very
high voltages (several hundred kV). The thyristor valve is operated at network frequency (50 Hz or 60
Hz), and by means of the firing (control) angle it is possible to change the DC voltage level of the bridge.
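As a rough illustration of how the firing angle sets the bridge DC voltage, the sketch below uses the ideal (loss-free, overlap-free) relation Ud = Ud0 · cos α, with Ud0 = (3√2/π) · ULL; the 400 kV valve-side voltage is a hypothetical example value, not a figure from this text.

```python
import math

def ideal_dc_voltage(u_ll_rms: float, alpha_deg: float) -> float:
    """Ideal no-load DC voltage of a 6-pulse thyristor bridge.

    Ud = Ud0 * cos(alpha), with Ud0 = (3*sqrt(2)/pi) * U_LL (line-to-line rms).
    Commutation overlap and losses are ignored in this sketch.
    """
    ud0 = 3 * math.sqrt(2) / math.pi * u_ll_rms
    return ud0 * math.cos(math.radians(alpha_deg))

# Example: firing angles from rectifier operation towards inverter operation
for alpha in (5, 15, 90, 150):
    print(f"alpha = {alpha:3d} deg -> Ud = {ideal_dc_voltage(400e3, alpha) / 1e3:8.1f} kV")
```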
Capacitor Commutated Converters (CCC).

An improvement on thyristor-based commutation, the CCC concept is characterized by


the use of commutation capacitors inserted in series between the converter transformers and the
thyristor valves. The commutation capacitors improve the commutation failure performance of
the converters when connected to weak networks.
Forced Commutated Converters.
This type of converter introduces a spectrum of advantages, e.g. the ability to feed passive networks
(without generation), independent control of active and reactive power, and improved power quality. The valves
of these converters are built with semiconductors that are able not only to turn on but also
to turn off. They are known as voltage source converters (VSCs). A new type of HVDC has
become available that makes use of this more advanced semiconductor technology instead of
thyristors for power conversion between AC and DC. The semiconductors used are insulated gate
bipolar transistors (IGBTs), and the converters
operate with high switching frequencies (1-2 kHz) utilizing pulse width modulation (PWM).
Configurations of HVDC
There are several different HVDC system configurations:
Mono-polar HVDC system:
In the mono-polar configuration, two converters are connected by a single pole line and a
positive or a negative DC voltage is used. As shown in the figure, there is only one insulated transmission
conductor installed, and the ground or sea provides the path for the return current.

Bipolar HVDC system:


This is the most commonly used configuration of HVDC transmission systems. The bipolar
configuration, shown in the figure, uses two insulated conductors as positive and negative poles. The
two poles can be operated independently if both neutrals are grounded. The bipolar
configuration increases the power transfer capacity. Under normal operation, the currents
flowing in the two poles are identical and there is no ground current. In case of failure of one pole,
power transmission can continue in the other pole, which increases the reliability. Most overhead
line HVDC transmission systems use the bipolar configuration.

Homo-polar HVDC system:


In the homopolar configuration, shown in the figure, two or more conductors have the
negative polarity and can be operated with ground or a metallic return. With two poles
operated in parallel, the homopolar configuration reduces the insulation costs. However, the
large earth return current is the major disadvantage.

Multi-terminal HVDC system:


In the multi-terminal configuration, three or more HVDC converter stations are geographically
separated and interconnected through transmission lines or cables. The system can be either
parallel, where all converter stations are connected to the same voltage, as shown in Fig. (b), or
a series multi-terminal system, where one or more converter stations are connected in series in one
or both poles, as shown in Fig. (c). A hybrid multi-terminal system contains a combination of
parallel and series connections of converter stations.

VOLTAGE-SOURCE CONVERTER
A voltage-source converter is connected on its ac-voltage side to a three-phase electric
power network via a transformer and on its dc-voltage side to capacitor equipment. The
transformer has on its secondary side a first, a second, and a third phase winding, each one with a
first and a second winding terminal. Resistor equipment is arranged at the transformer for
limiting the current through the converter when connecting the transformer to the power
network. The resistor equipment includes a first resistor connected to the first winding terminal
of the second phase winding. Switching equipment is adapted, in an initial position, to block
current through the phase windings; in a transition position, to form a current path which includes
at least the first and the second phase windings and, in series therewith, the first resistor (a path
which, when the converter is connected to the transformer, closes through the converter
and the capacitor equipment); and, in an operating position, to interconnect all the first winding
terminals to form the common neutral point. In VSC HVDC, Pulse Width Modulation
(PWM) is used for generation of the fundamental voltage. Using PWM, the magnitude and phase
of the voltage can be controlled freely and almost instantaneously within certain limits.
This allows independent and very fast control of active and reactive power flows. PWM
VSC is therefore a close to ideal component in the transmission network. From a system point of
view, it acts as a zero inertia motor or generator that can control active and reactive power almost
instantaneously. Furthermore, it does not contribute to the short circuit power, as the AC current
can be controlled.
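A minimal sketch (not from the source) of why magnitude and phase control translate into nearly independent P and Q control: for a converter coupled to the grid through a reactance X, the standard power-transfer relations give P mainly as a function of the phase angle δ and Q mainly as a function of the voltage-magnitude difference. The voltages, reactance and angle below are assumed example values.

```python
import math

def pq_through_reactance(v_grid: float, v_conv: float, delta_deg: float, x_ohm: float):
    """P and Q injected into the grid bus through a lossless coupling reactance X.

    P = Vc*Vs*sin(delta)/X          (set mainly by the phase angle delta)
    Q = (Vc*Vs*cos(delta) - Vs^2)/X (set mainly by |Vc| relative to |Vs|)
    """
    delta = math.radians(delta_deg)
    p = v_conv * v_grid * math.sin(delta) / x_ohm
    q = (v_conv * v_grid * math.cos(delta) - v_grid**2) / x_ohm
    return p, q

# Hypothetical per-phase example: 100 kV grid, 102 kV converter voltage, 10 ohm reactance
print(pq_through_reactance(100e3, 102e3, 5.0, 10.0))
```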
Voltage Source Converter based on IGBT technology
The modular low voltage power electronic platform is called PowerPak. It is a power
electronics building block (PEBB) with three integrated Insulated Gate Bipolar Transistor
(IGBT) modules. Each IGBT module consists of six switches forming three phase legs. Various
configurations are possible, for example three individual three-phase bridges on one PEBB, or one
three-phase bridge plus chopper(s). The PowerPak is easily adaptable to different
applications. One PowerPak, as used for the SVR, consists of
one three-phase bridge (the three terminals at the right-hand side), which provides the input to
the DC link (one IGBT module is used for it), and one output in the form of a single-phase H-
bridge (the two terminals to the left) acting as the booster converter. For the latter, two IGBT
modules are used, with three paralleled phase legs per output terminal. By paralleling such
PEBBs, adaptation to various ratings is possible.
GTO/IGBT (Thyristor based HVDC):
Normal thyristors (silicon controlled rectifiers) are not fully controllable switches (a "fully
controllable switch" can be turned on and off at will.) Thyristors can only be turned ON and
cannot be turned OFF. Thyristors are switched ON by a gate signal, but even after the gate signal
is de-asserted (removed), the thyristor remains in the ON-state until any turn-off condition occurs
(which can be the application of a reverse voltage to the terminals, or when the current flowing
through (forward current) falls below a certain threshold value known as the holding current.)
Thus, a thyristor behaves like a normal semiconductor diode after it is turned on or "fired". The
GTO can be turned-on by a gate signal, and can also be turned-off by a gate signal of negative
polarity.

Turn-on is accomplished by a positive current pulse between the gate and cathode terminals. As
the gate-cathode junction behaves like a PN junction, there will be some relatively small voltage between
the terminals. The turn-on phenomenon in a GTO is, however, not as reliable as in an
SCR (thyristor), and a small positive gate current must be maintained even after turn-on to improve
reliability.
Turn off is accomplished by a negative voltage pulse between the gate and cathode terminals.
Some of the forward current (about one third to one fifth) is "stolen" and used to induce a
cathode-gate voltage which in turn induces the forward current to fall and the GTO will switch
off (transitioning to the 'blocking' state.)
GTO thyristors suffer from long switch off times, whereby after the forward current falls, there
is a long tail time where residual current continues to flow until all remaining charge from the
device is taken away. This restricts the maximum switching frequency to approximately 1 kHz. It may
however be noted that the turn-off time of a comparable SCR is ten times that of a GTO; thus the
switching frequency of a GTO is much better than that of an SCR.

Gate turn-off (GTO) thyristors are able not only to turn on the main current but also to turn it off,
provided a gate drive circuit is used. Unlike conventional thyristors, they need no commutation
circuit, which downsizes application systems while improving efficiency. They are the most suitable
devices for high-current, high-speed switching applications, such as inverters and chopper circuits.
Bipolar devices made with SiC offer 20-50X lower switching losses as compared to
conventional semiconductors. A rough estimate of the switching power losses as a function of
switching frequency is shown in Figure 4. Another very significant property of SiC bipolar
devices is their lower differential on-state voltage drop compared with similarly rated Si bipolar devices,
even with order-of-magnitude smaller carrier lifetimes in the drift region.
This property allows high-voltage (>20 kV) devices to be far more reliable and thermally stable
than those made with silicon. The switching losses and the temperature stability of
bipolar power devices depend on the physics of operation of the device. The two major
categories of bipolar power devices are: (a) single injecting junction devices (for example the BJT
and IGBT); and (b) double injecting junction devices (such as the thyristor-based GTO/MTO/JCT/FCT
and PIN diodes).
In a power BJT, most of the minority carrier charge resides in the lightly doped collector
layer, and hence its operation can be approximated as that of an IGBT. The limited gain of a BJT will
make the following analysis less relevant for lower voltage devices.
Silicon carbide has been projected to have tremendous potential for high voltage solid-state power devices with very high voltage and current ratings because of its electrical and
physical properties. The rapid development of the technology for producing high quality single
crystal SiC wafers and thin films presents the opportunity to fabricate solid-state devices with
power-temperature capability far greater than that of devices currently available. This capability is
ideally suited to the applications of power conditioning in new more-electric or all-electric
military and commercial vehicles.
These applications require switches and amplifiers capable of large currents with
relatively low voltage drops. One of the most pervasive power devices in silicon is the Insulated
Gate Bipolar Transistor (IGBT). However, these devices are limited in their operating
temperature and their achievable power ratings compared to what is possible with SiC. Because of
the nearly ideal combination of characteristics of these devices, we propose to demonstrate the
first 4H-SiC Insulated Gate Bipolar Transistor in this Phase I effort. Both n-channel and p-channel
SiC IGBT devices will be investigated. The targeted current and voltage rating for the
Phase I IGBT will be a >200 V, 200 mA device that can operate at 350 °C.
12-pulse converters:
The basic design for practically all HVDC converters is the 12-pulse double bridge
converter which is shown in Figure below. The converter consists of two 6-pulse bridge
converters connected in series on the DC side. One of them is connected to the AC side by a Y-Y transformer, the other by a Y-D transformer. The AC currents from each 6-pulse converter will
then be phase shifted 30°. This will reduce the harmonic content in the total current drawn from
the grid, and leave only the characteristic harmonics of order 12m ± 1, m = 1, 2, 3, ..., i.e. the 11th,
13th, 23rd, 25th, etc. harmonics. The non-characteristic harmonics will still be present, but
considerably reduced. Thus the need for filtering is substantially reduced, compared to 6-pulse
converters. The 12-pulse converter is usually built up of 12 thyristor valves.
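As a quick check of the characteristic-harmonic rule quoted above, the following small sketch lists the orders p·m ± 1 for 6-pulse and 12-pulse converters (purely illustrative, not from the source):

```python
# Characteristic harmonic orders of a p-pulse converter: n = p*m +/- 1, m = 1, 2, 3, ...
def characteristic_harmonics(pulse_number: int, m_max: int = 3):
    orders = []
    for m in range(1, m_max + 1):
        orders.extend([pulse_number * m - 1, pulse_number * m + 1])
    return orders

print(characteristic_harmonics(6))   # 6-pulse:  [5, 7, 11, 13, 17, 19]
print(characteristic_harmonics(12))  # 12-pulse: [11, 13, 23, 25, 35, 37]
```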
Each valve consists of the necessary number of thyristors in series to withstand the
required blocking voltage with sufficient margin. Normally there is only one string of thyristors
in each valve, no parallel connection. Four valves are built together in series to form a quadruple
valve, and three quadruple valves, together with the converter transformer, controls and protection
equipment, constitute a converter.

Figure: 12-pulse converter.

Figure: Main elements of an HVDC converter station with one bipole consisting of two 12-pulse
converter units.
The converter transformers are usually three-winding transformers with the windings in Y/Y/D
connection. There can be one three-phase transformer or three single-phase transformers, according to local
circumstances. In order to optimize the relationship between the AC and DC voltages, the converter
transformers are equipped with tap changers.
HVDC converter stations
An HVDC converter station is normally built up of one or two 12-pulse converters as
described above, depending on the system being mono- or bipolar. In some cases each pole of a
bipolar system consists of two converters in series to increase the voltage and power rating of the
transmission. It is not common to connect converters directly in parallel in one pole. The poles
are normally as independent as possible to improve the reliability of the system, and each pole is
equipped with a DC reactor and DC filters. Additionally, the converter station contains some
jointly used equipment: the connection to the earth electrode (which is normally situated some
distance away from the converter station area), AC filters, and equipment for the supply of the
necessary reactive power.

Figure: Mono-polar HVDC transmission; voltage in station B according to the reversed polarity
convention.
BASIC CONTROL PRINCIPLES
DC transmission control
The current flowing in the DC transmission line shown in the figure below is determined by the DC
voltage difference between station A and station B. Using the notation shown in the figure, where
rd represents the total resistance of the line, we get for the DC current

Id = (UdA - UdB) / rd

and the power transmitted into station B is

PdB = UdB * Id
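A small numeric sketch of these two relations (the values are hypothetical, chosen only to illustrate how a small voltage difference across a low line resistance sets a large current and power):

```python
def dc_link(ud_a_kv: float, ud_b_kv: float, r_d_ohm: float):
    """DC line current and power received at station B, per the relations above."""
    i_d_ka = (ud_a_kv - ud_b_kv) / r_d_ohm   # kV / ohm = kA
    p_b_mw = ud_b_kv * i_d_ka                # kV * kA = MW
    return i_d_ka, p_b_mw

# Hypothetical example: 500 kV rectifier, 495 kV inverter, 5 ohm line resistance
print(dc_link(500.0, 495.0, 5.0))   # -> (1.0 kA, 495 MW)
```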

In rectifier operation the firing angle α should not be decreased below a certain minimum value
αmin, normally 3-5°, in order to make sure that there really is a positive voltage across the valve
at the firing instant. In inverter operation the extinction angle γ should never decrease below a
certain minimum value γmin, normally 17-19°, otherwise the risk of commutation failures
becomes too high.
On the other hand, both α and γ should be as low as possible to keep the necessary
nominal rating of the equipment to a minimum. Low values of α and γ also decrease the
consumption of reactive power and the harmonic distortion in the AC networks.
To achieve this, most HVDC systems are controlled to maintain γ = γmin in normal
operation. The DC voltage level is controlled by the transformer tap changer in inverter station
B. The DC current is controlled by varying the DC voltage in rectifier station A, and thereby the
voltage difference between A and B. Due to the small DC resistances in such a system, only a
small voltage difference is required, and small variations in rectifier voltage give large
variations in current and transmitted power. The DC current through a converter cannot change
its direction of flow, so the only way to change the direction of power flow through a DC
transmission line is to reverse the voltage of the line. The sign of the voltage difference has to
be kept constantly positive to keep the current flowing. To keep the firing angle as low as
possible, the transformer tap changer in rectifier station A is operated to keep α at an operating
value which gives only the necessary margin to αmin needed to control the current.
Converter current/voltage characteristics
The resistive voltage drop in the converter and transformer, as well as the current-independent voltage drop
in the thyristor valves, are often disregarded in practical analysis, as they are normally in the
order of 0.5 % of the normal operating voltage. The commutation voltage drop, however,
has to be taken into account, as this is in the order of 5 to 10 % of the normal operating
voltage. The direct voltage Ud from a 6-pulse bridge converter can then be expressed by

Ud = Udi0 (cos α - dxN · id)

where α is the firing angle, Udi0 is the ideal no-load direct voltage, dxN is the relative inductive
(commutation) voltage drop and id is the DC current in per unit.

If the converter is operating as an inverter it is more convenient to work with the extinction angle γ
instead of the firing angle α. The extinction angle is defined as the angle from the end of
commutation to the next zero crossing of the commutation voltage. The firing angle α, the commutation
(overlap) angle μ and the extinction angle γ are related by

α + μ + γ = 180°

In inverter mode, the direct voltage from the inverter can be written as

Ud = Udi0 (cos γ - dxN · id)
The current/voltage characteristics expressed above are shown in the figure for normal values of id and
dxN. In order to create a characteristic diagram for the complete transmission, it is usual to define
positive voltage in inverter operation in the opposite direction compared to rectifier operation. It
is clear that to operate both converters on a constant firing/extinction angle principle is like
leaving them without control. This will not give a stable point of operation, as both
characteristics have approximately the same slope. Small differences appear due to variations in
transformer data and voltage drop along the line.
To gain the best possible control the characteristics should cross at as close to a right
angle as possible. This means that one of the characteristics should preferably be constant
current. This can only be achieved by a current controller.

If the current/voltage diagram of the rectifier is combined with a constant current controller
characteristic, we get the steady-state diagram shown below for converter station A. A similar
diagram can be drawn for converter station B. If we apply the reversed polarity convention for
the inverter and combine the diagrams for station A and station B, we get the combined diagram
below. In normal operation, the rectifier will be operating in current control mode with a firing
angle α above αmin.

Steady state Ud/Id diagram for converter station A

Steady state Ud/Id diagram for converter stations A and B


The inverter has a slightly lower current command than the rectifier and tries to decrease the
current by increasing the counter voltage, but cannot decrease the extinction angle below γmin. Thus we get the
operating point A. We assume that the characteristic for station B is referred to station A, that is,
it is corrected for the voltage drop along the transmission line. This voltage drop is in the
order of 1-5 % of the rated DC voltage.
If the AC voltage at the rectifier station drops, due to some external disturbance, the
voltage difference is reduced and the DC current starts to fall. The current controller in the
rectifier station starts to reduce the firing angle α, but soon meets the limit αmin, so the current
cannot be upheld. When the current falls below the current command of the inverter, the inverter
control reduces the counter voltage to keep the current at the inverter current command, until a
new stable operating point B is reached. If the current command at station A is decreased below
that of station B, station A will see a current that is too high and start to increase the firing angle α
to reduce the voltage. Station B will see a diminishing current and try to keep it up by increasing
the extinction angle γ to reduce the counter voltage.
Finally station A meets the γmin limit and cannot reduce the voltage any further, and the
new operating point will be at point C. Here the voltage has been reversed to negative while the
current is still positive, that is, the power flow has been reversed. Station A is operating as an
inverter and station B as a rectifier. The difference between the current commands of the rectifier
and the inverter is called the current margin.
It is possible to change the power flow in the transmission simply by changing the sign of
the current margin, but in practice it is desirable to do this in more controllable ways. Therefore
the inverter is normally equipped with an αmin limitation in the range of 95-105°. To avoid
current fluctuations between operating points A and B at small voltage variations, the corner of
the inverter characteristic is often cut off. Finally, it is not desirable to operate the transmission
with high currents at low voltages, and most HVDC controls are equipped with a voltage
dependent current command limitation.
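A minimal sketch (my own, not from the source) of how the current margin decides which station controls the current: the rectifier tries to hold its current order and the inverter holds that order minus the margin, so in normal operation the rectifier is the current controller (point A) and the inverter only takes over current control when the rectifier hits its αmin limit (point B).

```python
def steady_state_role(id_order_rect: float, current_margin: float, rect_at_alpha_min: bool):
    """Very simplified allocation of control roles in a two-terminal HVDC link.

    The inverter current order is the rectifier order minus the current margin.
    If the rectifier is pushed against its alpha_min limit (e.g. by a rectifier-side
    AC voltage dip), the inverter takes over current control at its lower order.
    """
    id_order_inv = id_order_rect - current_margin
    if not rect_at_alpha_min:
        return {"current_controller": "rectifier", "dc_current_pu": id_order_rect}
    return {"current_controller": "inverter", "dc_current_pu": id_order_inv}

print(steady_state_role(1.0, 0.1, rect_at_alpha_min=False))  # normal operation: point A
print(steady_state_role(1.0, 0.1, rect_at_alpha_min=True))   # rectifier AC voltage dip: point B
```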
Master control system
The controls described above are basic and fairly standardized and similar for all HVDC
converter stations. The master control, however, is usually system specific and individually
designed. Depending on the requirements of the transmission, the control can be designed for
constant current or constant power transmitted, or it can be designed to help stabilize the
frequency in one of the AC networks by varying the amount of active power transmitted. The
control systems are normally identical in both converter systems in a transmission, but the master
control is only active in the station selected to act as the master station, which controls the
current command.
The calculated current command is transmitted by a communication system to the slave
converter station, where the pre-designed current margin is added if the slave is to act as rectifier,

subtracted if it is to act as inverter. In order to synchronize the two converters and assure that
they operate with the same current command (apart from the current margin), a telecommunications
channel is required.
Should the telecommunications system fail for any reason, the current commands to both
converters are frozen, thus allowing the transmission to stay in operation. Special fail-safe
techniques are applied to ensure that the telecommunications system is fault-free. The
requirements for the telecommunications system are especially high if the transmission is
required to have a fast control of the transmitted power, and the time delay in processing and
transmitting these signals will influence the dynamics of the total control system.

Comparison of Different HVAC-HVDC


The configurations considered here were selected in order to examine the behavior of the losses in combined
transmission, and not in order to provide the best economical solutions for real case projects. Thus, most of the
configurations are overrated, increasing the initial investment cost and consequently the energy transmission
cost. The small number of different configurations analyzed provides a limited set of results,
from which specific conclusions can be drawn regarding the energy transmission cost.
Nevertheless, the same approach as for the individual HVAC/HVDC systems is followed in
order to evaluate the energy availability and the energy transmission cost.
Presentation of Selected Configurations and Calculation of the Energy Transmission Cost
For the combined HVAC-HVDC transmission systems, only 500 MW and 1000 MW wind farms
were considered. The choices for the transmission distance were limited to 50, 100 and 200 km.
The three following, general combinations were compared:
1. HVAC + HVDC VSC
2. HVAC + HVDC LCC
3. HVDC LCC + HVDC VSC
The specific configurations for each solution, based on the transmission distance and the size of
the wind farm, are presented in Tables.

MULTI-LEVEL CURRENT REINJECTION (MLCR)


Structure and Operating Principles:
MLCR can be considered as the dual of MLVR-VSC. Thus the waveforms developed for
the MLVR configuration, and illustrated in Figure 7.4, also apply to the MLCR configuration. In
this case current, instead of voltage, constitutes the DC source. Unlike MLVR, which requires
asymmetrical switches (with unidirectional voltage blocking and bidirectional current
capability), MLCR requires symmetrical switches (with bidirectional voltage blocking and
unidirectional current capability). Thus if IGBT switches were to be used in MLCR conversion, a
diode would have to be connected in series with the IGBT, and the extra power loss may not be
acceptable in high-voltage applications.
Therefore, symmetrical switches, such as the GTO and IGCT, are more appropriate
devices for MLCR conversion, particularly given the lower switching frequencies involved (as
compared with those of PWM conversion). The use of switches with turn-off capability provides
the reinjection circuit with the opportunity to self-commutate, instead of using the ripple voltage
as the commutating voltage (the solution described in Section 3.8.2), and thus to place at will the
position of the reinjection current pulses.
The same topological structure and basic control strategy of the DC ripple reinjection
concept described in Section 3.8.2 for line-commutated conversion can be used in self-commutating
CSC (this has been explained in Section 4.5). When the AC system voltage is
perfectly symmetrical, the 12-pulse voltage ripple is of low amplitude and therefore the
inductance (Lm) required to suppress the DC current fluctuation is very low. In practice,
however, larger values of Lm are needed to cope with the presence of some system asymmetry.
The required levels of current reinjection are produced by tapping the reinjection transformer.
Compared with VSC, where equal-size capacitors are required to share the DC voltage,
the reinjection transformer taps in the CSC case can be arranged more flexibly to derive the AC
output current waveforms. Figure 1 shows the (m+1)-level MLCR configuration based on the
12-pulse series-connected converter; in this figure the reinjection switches are shown as GTOs.

The output currents of the two bridges are shaped by the reinjection currents into (m+1)-level
waveforms and produce a 12m-pulse equivalent output current waveform on the primary of the
interface transformer.


There are two important differences between the double-bridge MLCR system and
the single-bridge system. One is the operating frequency, which is now six times
(instead of three) the fundamental; the other is the location of the reinjection point, which is now
the midpoint between the series-connected bridges, instead of the transformer neutral. The
primaries (Np) of two single phase transformers are connected across the bridges terminals and
each of their multi-tapped secondaries (Nk) is periodically connected in series with the DC line,
the repetition period being six times that of the fundamental frequency; this is achieved by firing
simultaneously two opposite-conducting GTOs at symmetrically placed taps on both sides of
the reinjection transformer secondaries.

Figure 1 Structure of the series MLCR-CSC


Similarly, Figure 2 shows the (m+1)-level MLCR configuration based on the 12-pulse
parallel-connected converter. The multi-tapped reactor assisted by the switching action of the
reinjection switches distributes IDC to the two bridges in (m+1)-level waveforms. The main

property of the MLCR scheme is its capability to control the position, magnitude and duration of
the reinjection steps.
If these parameters are optimized to achieve maximum harmonic cancellation, for every
pair of taps symmetrically placed with respect to the two reinjection transformer secondaries
the pulse number is doubled, with the midpoint tap and short-circuiting switch pair
adding an extra multiplication factor. Thus the five-level configuration shown in Figure 3 can
achieve 60-pulse conversion (i.e. 5 (reinjection level number) × 6 (reinjection frequency ratio) × 2
(number of bridges)). An added bonus of the reinjection current is that the converter valve
currents are forced to a very low value (i.e. to an almost ZCS condition) during the
commutations, which simplifies the design of the snubber circuits. The ZCS condition does not
apply to the reinjection switches, but due to the unidirectional nature of the current, the snubber
required can be of the simple resistor-capacitor-diode (RCD) type.
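The pulse-number arithmetic quoted above can be written out as a one-line check (the factor names simply follow the text; nothing else is assumed):

```python
def equivalent_pulse_number(reinjection_levels: int, reinjection_freq_ratio: int = 6, bridges: int = 2) -> int:
    """Equivalent pulse number of the series MLCR scheme as described in the text."""
    return reinjection_levels * reinjection_freq_ratio * bridges

print(equivalent_pulse_number(5))  # five-level reinjection -> 60-pulse conversion
```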

Figure 2 Structure of the parallel MLCR-CSC

Figure 3 Five-level reinjection series-connected CSC


Simulated Current Waveforms of the 5-Level Hybrid MLCR-CSC

MLCR-CSC block diagram for the control of the real and reactive powers

The MLCR-CSC block diagram for the control of the real and imaginary currents is shown in the
figure above. The test carried out assumes that the converter is acting exclusively as
a reactive power controller (i.e. in STATCOM mode).
Unlike the case of PWM-VSC, due to the low, fundamental-frequency-related switching
nature of MLCR, it is not possible in this case to exercise independent control of the amplitude
and power angle. The only variable that controls the MLCR operation is the power factor
angle. A similar situation occurred in the case of the MLVR-VSC configuration described
earlier, where the phase angle difference between the converter output voltage and the AC
source voltage was the unique control variable. However, in the latter case only a relatively
small variation of that angle was needed to achieve four-quadrant operation, whereas in MLCR this
condition requires a variation of the power factor angle over a much wider range. Thus the relationships
between the power factor angle and [P, Q] are highly non-linear. This is shown by the following
expressions derived from the block diagrams of Figures 7.29 and 7.30, where Vt is the rms value
of the converter terminal (phase-to-phase) voltage:

For the test case (STATCOM operation), only a small variation of the power factor angle around
its quadrature value is needed to control the DC current.


The amplitude increment or decrement of the DC load branch current depends on the
polarity of the DC voltage across the load branch, which is proportional to the cosine of the power
factor angle if the converter terminal voltage is kept constant and the losses in the
smoothing reactor are ignored. However, the amplitude increment or decrement of the
unidirectional DC current cannot be determined solely by the polarity of the power angle increment
around the 90° mark, because the cosine (and hence the DC voltage across the load branch) has
opposite signs on the two sides of 90°. Thus the power angle increment around 90° has to be
coordinated with the converter operating condition to generate the appropriate polarity of the DC
voltage across the load branch.

(a) The generated Q and Qref ;


(b) The average DC current and voltage;
(c), (d), (e) the three-phase voltages and currents;
(f) The DC voltage and three-phase currents (with the time scale reduced).

VOLTAGE SOURCE CONVERTERS (VSC)


A voltage-source converter is a power electronic device, which can generate a sinusoidal
voltage with any required magnitude, frequency and phase angle. Voltage source converters are
widely used in adjustable-speed drives, but can also be used to mitigate voltage dips. The VSC is
used either to completely replace the voltage or to inject the missing voltage. The missing
voltage is the difference between the nominal voltage and the actual voltage. The converter is normally
based on some kind of energy storage, which will supply the converter with a DC voltage. The
solid-state electronics in the converter is then switched to get the desired output voltage.
Normally the VSC is not only used for voltage dip mitigation, but also for other power quality
issues, e.g. flicker and harmonics.
The voltage source rectifier operates by keeping the dc link voltage at a desired reference
value, using a feedback control loop as shown in Figure. To accomplish this task, the dc link
voltage is measured and compared with a reference VREF. The error signal generated from this
comparison is used to switch the six valves of the rectifier ON and OFF. In this way, power can
be drawn from or returned to the ac source according to the dc link voltage requirements. Voltage VD is
measured at capacitor CD. When the current ID is positive (rectifier operation), the capacitor CD
is discharged, and the error signal asks the Control Block for more power from the ac supply. The
Control Block takes the power from the supply by generating the appropriate PWM signals for
the six valves. In this way, more current flows from the ac to the dc side, and the capacitor
voltage is recovered. Inversely, when ID becomes negative (inverter operation), the capacitor CD
is overcharged, and the error signal asks the control to discharge the capacitor and return power
to the ac mains. The PWM control not only can manage the active power, but also reactive
power, allowing this type of rectifier to correct power factor. In addition, the ac current
waveforms can be maintained as almost sinusoidal, which reduces harmonic contamination to
the mains supply. Pulse width-modulation consists of switching the valves ON and OFF,
following a pre-established template. This template could be a sinusoidal waveform of voltage or
current. For example, the modulation of one phase could be as the one shown in Fig.

This PWM pattern is a periodic waveform whose fundamental is a voltage with the
same frequency as the template. The amplitude of this fundamental, called VMOD in the figure, is also
proportional to the amplitude of the template. To make the rectifier work properly, the PWM
pattern must generate a fundamental VMOD with the same frequency as the power source.

Figure: Operation principle of the voltage source rectifier.

Figure: A PWM pattern and its fundamental VMOD.

By changing the amplitude of this fundamental and its phase shift with respect to
the mains, the rectifier can be controlled to operate in the four quadrants: leading power factor
rectifier, lagging power factor rectifier, leading power factor inverter, and lagging power factor
inverter. Changing the pattern of modulation, as shown in the figure, modifies the magnitude of
VMOD. Displacing the PWM pattern changes the phase shift. The interaction between VMOD
and V (source voltage) can be seen through a phasor diagram.
This interaction permits understanding of the four-quadrant capability of this rectifier. In
Fig., the following operations are displayed: (a) rectifier at unity power factor; (b) inverter at
unity power factor; (c) capacitor (zero power factor); and (d) inductor (zero power factor). In
the figure, is denotes the rms value of the source current. This current flows through the semiconductors in
the same way as shown in Fig. 12.40. During the positive half cycle, the transistor TN connected
at the negative side of the dc link is switched ON, and the current is begins to flow through TN
(as iTn).

The current returns to the mains and comes back to the valves, closing a loop with
another phase, and passing through a diode connected at the same negative terminal of the dc
link. The current can also go to the dc load (inversion) and return through another transistor
located at the positive terminal of the dc link. When the transistor TN is switched OFF, the
current path is interrupted, and the current begins to flow through diode DP, connected at the
positive terminal of the dc link. This current, called iDp in the figure, goes directly to the dc link,
helping in the generation of the current idc. The current idc charges the capacitor CD and permits
the rectifier to produce dc power. The inductances LS are very important in this process, because
they generate an induced voltage that allows conduction of the diode DP. A similar operation
occurs during the negative half cycle, but with TP and DN.

Changing VMOD through the PWM pattern.

Fig. Four-quadrant operation of the force-commutated rectifier: (a) the PWM force-commutated
rectifier; (b) rectifier operation at unity power factor; (c) inverter operation at unity power factor;
(d) capacitor operation at zero power factor; and (e) inductor operation at zero power factor.
Under inverter operation, the current paths are different because the currents flowing
through the transistors come mainly from the dc capacitor CD. Under rectifier operation, the
circuit works like a Boost converter, and under inverter operation it works as a Buck converter.
To have full control of the operation of the rectifier, its six diodes must be polarized negatively
at all values of the instantaneous ac supply voltage. Otherwise, the diodes will conduct, and the
PWM rectifier will behave like a common diode rectifier bridge.
The way to keep the diodes blocked is to ensure a dc link voltage higher than the peak dc
voltage generated by the diodes alone, as shown in Fig. In this way, the diodes remain polarized
negatively, and they will conduct only when at least one transistor is switched ON, and favorable
instantaneous ac voltage conditions are given. In the figure, VD represents the capacitor dc voltage,
which is kept higher than the normal diode-bridge rectification value vBRIDGE. To maintain
this condition, the rectifier must have a control loop like the one displayed in the figure above.
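A rough numeric check of this blocking condition (my own sketch, with a hypothetical 400 V ac supply): the uncontrolled diode bridge alone would charge the dc link to the peak of the line-to-line voltage, so VD must be kept above √2·VLL plus some margin.

```python
import math

def min_dc_link_voltage(v_ll_rms: float, margin: float = 1.05) -> float:
    """Approximate minimum dc-link voltage that keeps the bridge diodes blocked.

    The diode bridge alone would rectify to the peak line-to-line voltage
    sqrt(2)*V_LL; VD must stay above that value (plus a small margin) so the
    converter behaves as a PWM rectifier and not as a plain diode bridge.
    """
    return margin * math.sqrt(2) * v_ll_rms

print(min_dc_link_voltage(400.0))   # ~594 V for a hypothetical 400 V (rms, line-to-line) supply
```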

Fig. Current waveforms through the mains, the valves, and the dc link.
Three-Phase Voltage Source Inverters:
Single-phase VSIs cover low-range power applications and three-phase VSIs cover the
medium- to high-power applications. The main purpose of these topologies is to provide a three-phase voltage source, where the amplitude, phase, and frequency of the voltages should always
be controllable. Although most of the applications require sinusoidal voltage waveforms (e.g.,
ASDs, UPSs, FACTS, var compensators), arbitrary voltages are also required in some emerging
applications (e.g., active filters, voltage compensators). The standard three-phase VSI topology is
shown in Fig. and the eight valid switch states are given in Table. As in single-phase VSIs, the
switches of any leg of the inverter (S1 and S4, S3 and S6, or S5 and S2) cannot be switched on
simultaneously because this would result in a short circuit across the dc link voltage supply.
Similarly, in order to avoid undefined states in the VSI, and thus undefined ac output line
voltages, the switches of any leg of the inverter cannot be switched off simultaneously as this
will result in voltages that will depend upon the respective line current polarity.

Of the eight valid states, two of them produce zero ac line voltages. In this case, the ac
line currents freewheel through either the upper or lower components. The remaining states
produce nonzero ac output voltages. In order to generate a given voltage waveform, the inverter
moves from one state to another. Thus the resulting ac output line voltages consist of discrete
values of voltage, namely -vi, 0, and +vi, for the topology shown in Fig. The selection of the states
in order to generate the given waveform is done by the modulating technique that should ensure
the use of only the valid states.
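As an illustration of the eight valid states (a sketch of my own, using the common convention that 1 means the upper switch of a leg is on and 0 means the lower one is on), the line-to-line voltages take only the values -vi, 0 and +vi, with the two all-equal states giving zero output:

```python
from itertools import product

VI = 1.0  # dc-link voltage (per unit)

# One bit per leg (a, b, c): 1 = upper switch on, 0 = lower switch on.
# The line-to-line voltages follow from which dc rail each phase leg is tied to.
for a, b, c in product((0, 1), repeat=3):
    v_ab = (a - b) * VI
    v_bc = (b - c) * VI
    v_ca = (c - a) * VI
    zero_state = (a == b == c)   # states 000 and 111 give zero ac line voltages
    print(f"state {a}{b}{c}: v_ab={v_ab:+.1f}, v_bc={v_bc:+.1f}, v_ca={v_ca:+.1f}"
          + ("  (zero state)" if zero_state else ""))
```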

PULSE WIDTH MODULATION


What is PWM?
Pulse Width Modulation (PWM) is the most effective means to achieve constant voltage
battery charging by switching the solar system controller's power devices. When in PWM
regulation, the current from the solar array tapers according to the battery's condition and
recharging needs. Consider a waveform such as this: it is a voltage switching between 0 V and 12 V.
It is fairly obvious that, since the voltage is at 12 V for exactly as long as it is at 0 V, a
'suitable device' connected to its output will see the average voltage and think it is being fed 6 V,
exactly half of 12 V. So, by varying the width of the positive pulse, we can vary the 'average'
voltage.

Similarly, if the switches keep the voltage at 12 V for three times as long as at 0 V, the average
will be 3/4 of 12 V, or 9 V, as shown below.

And if the output pulse of 12 V lasts only 25% of the overall time, then the average is
3 V, a quarter of 12 V.
By varying - or 'modulating' - the time that the output is at 12 V (i.e. the width of the
positive pulse) we can alter the average voltage. So we are doing 'pulse width modulation'. I said
earlier that the output had to feed 'a suitable device'. A radio would not work from this: the radio
would see 12 V then 0 V, and would probably not work properly. However, a device such as a
motor will respond to the average, so PWM is a natural fit for motor control.
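These averages are simply the duty cycle times the high-level voltage; a tiny sketch of that arithmetic, using the 12 V level from the examples above:

```python
def pwm_average(v_high: float, duty_cycle: float, v_low: float = 0.0) -> float:
    """Average output voltage of an ideal PWM waveform."""
    return duty_cycle * v_high + (1.0 - duty_cycle) * v_low

for d in (0.50, 0.75, 0.25):
    print(f"duty cycle {d:.0%}: average = {pwm_average(12.0, d):.1f} V")
# -> 6.0 V, 9.0 V and 3.0 V, matching the cases described above
```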
Pulse Width modulator
So, how do we generate a PWM waveform? It's actually very easy; there are circuits
available on the TEC site. First you generate a triangle waveform as shown in the diagram below.
You compare this with a DC voltage, which you adjust to control the ratio of on to off time that
you require. When the triangle is above the 'demand' voltage, the output goes high. When the
triangle is below the demand voltage, the output goes low.

When the demand level is in the middle (A) you get a 50:50 output, as shown in black. Half the
time the output is high and half the time it is low. Fortunately, there is an IC (integrated circuit)
called a comparator: these usually come four sections to a single package.

One can be used as the oscillator to produce the triangular waveform and another to do
the comparing, so a complete oscillator and modulator can be done with half an IC and maybe 7
other bits.
The triangle waveform, which has approximately equal rise and fall slopes, is one of the
commonest used, but you can use a sawtooth (where the voltage falls quickly and rises slowly).
You could use other waveforms, and the exact linearity (how good the rise and fall are) is not too
important.
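A software sketch of the same triangle-versus-demand comparison described above (purely illustrative; the period and demand level are made-up values):

```python
def triangle(t: float, period: float) -> float:
    """Symmetric triangle wave between 0 and 1 with equal rise and fall slopes."""
    phase = (t % period) / period
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

def pwm_output(t: float, demand: float, period: float = 1e-3) -> int:
    """Comparator as described above: output is high while the triangle is above the demand level."""
    return 1 if triangle(t, period) > demand else 0

# Sample one period at a demand level of 0.3; the fraction of '1's is the duty cycle
samples = [pwm_output(n * 1e-6, demand=0.3) for n in range(1000)]
print(sum(samples) / len(samples))   # ~0.7: with this polarity, raising the demand level shortens the on-time
```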
Traditional solenoid driver electronics rely on linear control, which is the application of a
constant voltage across a resistance to produce an output current that is directly proportional to
the voltage. Feedback can be used to achieve an output that matches exactly the control signal.
However, this scheme dissipates a lot of power as heat, and it is therefore very inefficient.
A more efficient technique employs pulse width modulation (PWM) to produce the
constant current through the coil. A PWM signal is not constant. Rather, the signal is on for part
of its period, and off for the rest. The duty cycle, D, refers to the percentage of the period for
which the signal is on. The duty cycle can be anywhere from 0, where the signal is always off, to 1,
where the signal is constantly on. A 50% D results in a perfect square wave. (Figure 1)

A solenoid is a length of wire wound in a coil. Because of this configuration, the solenoid
has, in addition to its resistance, R, a certain inductance, L. When a voltage, V, is applied across
an inductive element, the current, I, produced in that element does not jump up to its constant
value, but gradually rises to its maximum over a period of time called the rise time (Figure 2).
Conversely, I does not disappear instantaneously, even if V is removed abruptly, but decreases
back to zero in the same amount of time as the rise time.
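A small sketch of that exponential rise, using the standard first-order RL step response I(t) = (V/R)(1 - e^(-tR/L)); the component values are invented for illustration:

```python
import math

def solenoid_current(t: float, v: float, r: float, l: float) -> float:
    """First-order RL response: current rise after a step voltage V is applied."""
    tau = l / r
    return (v / r) * (1.0 - math.exp(-t / tau))

# Hypothetical solenoid: 12 V supply, 10 ohm, 50 mH -> tau = 5 ms, steady-state current 1.2 A
for t_ms in (1, 5, 15, 25):
    print(f"t = {t_ms:2d} ms: I = {solenoid_current(t_ms * 1e-3, 12.0, 10.0, 0.05):.3f} A")
```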

Therefore, when a low frequency PWM voltage is applied across a solenoid, the current
through it will be increasing and decreasing as V turns on and off. If the on-time set by D is shorter than the rise
time, I will never achieve its maximum value, and will be discontinuous since it will go back to
zero during V's off period (Figure 3). In contrast, if the on-time is larger than the rise time, I will never
fall back to zero, so it will be continuous, and have a DC average value. The current will not be
constant, however, but will have a ripple (Figure 4).

At high frequencies, V turns on and off very quickly, regardless of D, such that the
current does not have time to decrease very far before the voltage is turned back on. The
resulting current through the solenoid is therefore considered to be constant. By adjusting the D,
the amount of output current can be controlled. With a small D, the current will not have much
time to rise before the high frequency PWM voltage takes effect and the current stays constant.
With a large D, the current will be able to rise higher before it becomes constant.

Dither
Static friction, stiction, and hysteresis can cause the control of a hydraulic valve to be
erratic and unpredictable. Stiction can prevent the valve spool from moving with small input
changes, and hysteresis can cause the shift to be different for the same input signal. In order to
counteract the effects of stiction and hysteresis, small vibrations about the desired position are
created in the spool. This constantly breaks the static friction, ensuring that the spool will move even with
small input changes, and the effects of hysteresis are averaged out.
Dither is a small ripple in the solenoid current that causes the desired vibration and thereby
increases the linearity of the valve. The amplitude and frequency of the dither must be
carefully chosen. The amplitude must be large enough and the frequency slow enough that the
spool will respond, yet they must also be small and fast enough not to result in a pulsating
output.
The optimum dither must be chosen such that the problems of stiction and hysteresis are
overcome without new problems being created. Dither in the output current is a byproduct of low
frequency PWM, as seen above. However, the frequency and amplitude of the dither will be a
function of the duty cycle, which is also used to set the output current level. This means that low
frequency dither is not independent of current magnitude. The advantage of using high frequency
PWM is that dither can be generated separately, and then superimposed on top of the output
current.

This allows the user to independently set the current magnitude (by adjusting the D), as
well as the dither frequency and amplitude. The optimum dither, as set by the user, will therefore
be constant at all current levels.
Why the PWM frequency is important:
The PWM is a large amplitude digital signal that swings from one voltage extreme to the
other. And, this wide voltage swing takes a lot of filtering to smooth out. When the PWM
frequency is close to the frequency of the waveform that you are generating, then any PWM
filter will also smooth out your generated waveform and drastically reduce its amplitude. So, a
good rule of thumb is to keep the PWM frequency much higher than the frequency of any
waveform you generate.
Finally, filtering pulses is not just about the pulse frequency but also about the duty cycle and
how much energy is in the pulse. The same filter will do better on a low or high duty cycle pulse
compared to a 50% duty cycle pulse, because the wider pulse has more time to integrate to a
stable filter voltage and the smaller pulse has less time to disturb it.
The inspiration for this circuit was a request to control the speed of a large positive displacement fuel pump. The pump
was sized to allow full power of a boosted engine in excess of 600 hp.
At idle or highway cruise, this same engine needs far less fuel yet the pump still normally
supplies the same amount of fuel. As a result the fuel gets recycled back to the fuel tank,
unnecessarily heating the fuel. This PWM controller circuit is intended to run the pump at a low
speed setting during low power and allow full pump speed when needed at high engine power
levels.
Motor Speed Control (Power Control)
Typically when most of us think about controlling the speed of a DC motor we think of
varying the voltage to the motor. This is normally done with a variable resistor and provides a
limited useful range of operation. The operational range is limited for most applications
primarily because torque drops off faster than the voltage drops.

Most DC motors cannot effectively operate with a very low voltage. This method also
causes overheating of the coils and eventual failure of the motor if operated too slowly. Of
course, DC motors have had speed controllers based on varying voltage for years, but the range
of low speed operation had to stay above the failure zone described above.
Additionally, the controlling resistors are large and dissipate a large percentage of energy
in the form of heat. With the advent of solid state electronics in the 1950s and 1960s, and with this
technology becoming very affordable in the 1970s and 80s, the use of pulse width modulation
(PWM) became much more practical. The basic concept is to keep the voltage at the full value
and simply vary the amount of time the voltage is applied to the motor windings. Most PWM
circuits use large transistors to simply allow power On & Off, like a very fast switch.
This sends a steady frequency of pulses into the motor windings. When full power is
needed one pulse ends just as the next pulse begins, 100% modulation. At lower power settings
the pulses are of shorter duration. When the pulse is on for as long as it is off, the motor is
operating at 50% modulation. Several advantages of PWM are efficiency, a wider operational
range and longer-lived motors. All of these advantages result from keeping the voltage at full
scale, which limits the current to a safe level for the windings.
PWM allows a very linear response in motor torque even down to low PWM% without
causing damage to the motor. Most motor manufacturers recommend PWM control rather than
the older voltage control method. PWM controllers can be operated at a wide range of
frequencies. In theory very high frequencies (greater than 20 kHz) will be less efficient than
lower frequencies (as low as 100 Hz) because of switching losses.
The large transistors used for this on/off activity have resistance when conducting current, a
loss that exists at any frequency. These transistors also have a loss every time they turn on and
every time they turn off. So at very high frequencies, the turn on/off losses become much
more significant. For our purposes the circuit as designed is running at 526 Hz. Somewhat of an
arbitrary frequency, it works fine.

Depending on the motor used, there can be a hum from the motor at lower PWM%. If
objectionable, the frequency can be changed to a much higher frequency above our normal
hearing level (>20,000 Hz).
PWM Controller Features:
This controller offers a basic Hi Speed and Low Speed setting and has the option to
use a Progressive increase between Low and Hi speed. Low Speed is set with a trim pot inside
the controller box. Normally when installing the controller, this speed will be set depending on
the minimum speed/load needed for the motor. Normally the controller keeps the motor at this
Lo Speed except when Progressive is used and when Hi Speed is commanded (see below). Low
Speed can vary anywhere from 0% PWM to 100%.
Progressive control is commanded by a 0-5 volt input signal. This starts to increase PWM
% from the low speed setting as the 0-5 volt signal climbs. This signal can be generated from a
throttle position sensor, a Mass Air Flow sensor, a Manifold Absolute Pressure sensor or any
other way the user wants to create a 0-5 volt signal. This function could be set to increase fuel
pump power as turbo boost starts to climb (MAP sensor). Or, if controlling a water injection
pump, Low Speed could be set at zero PWM% and as the TPS signal climbs it could increase
PWM%, effectively increasing water flow to the engine as engine load increases. This controller
could even be used as a secondary injector driver (several injectors could be driven in a batch
mode, hi impedance only), with Progressive control (0-100%) you could control their output for
fuel or water with the 0-5 volt signal.
Progressive control adds enormous flexibility to the use of this controller. Hi Speed is
the same as hard-wiring the motor to a steady 12 volt DC source: the controller is providing
100% PWM, steady 12 volt DC power. Hi Speed is selected in three different ways on this
controller: 1) Hi Speed is automatically selected for about one second when power goes on. This
gives the motor full torque at the start. If needed, this time can be increased (the value of C1
would need to be increased). 2) Hi Speed can also be selected by applying 12 volts to the Hi
Speed signal wire. This gives Hi Speed regardless of the Progressive signal. 3) When the
Progressive signal gets to approximately 4.5 volts, the circuit reaches 100% PWM Hi Speed.
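The Low Speed / Progressive / Hi Speed behavior described above can be sketched as a simple duty-cycle mapping (the 4.5 V saturation point comes from the text; the 30% Low Speed setting and the rest are illustrative assumptions):

```python
def duty_cycle(progressive_volts: float, low_speed_duty: float = 0.30,
               hi_speed_request: bool = False, full_scale_volts: float = 4.5) -> float:
    """Map the 0-5 V Progressive signal to a PWM duty cycle.

    The controller idles at the Low Speed setting, ramps the duty cycle up as the
    0-5 V signal climbs, and saturates at 100% PWM (Hi Speed) at about 4.5 V or
    whenever Hi Speed is commanded directly.
    """
    if hi_speed_request:
        return 1.0
    ramp = min(max(progressive_volts, 0.0), full_scale_volts) / full_scale_volts
    return low_speed_duty + (1.0 - low_speed_duty) * ramp

for v in (0.0, 2.25, 4.5):
    print(f"{v:.2f} V -> {duty_cycle(v):.0%} PWM")
```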
How does this technology help:
The benefits noted above are technology driven. The more important question is what the jump of
PWM technology from the 1970s into the new millennium offers:
Longer battery life:
reducing the costs of the solar system
reducing battery disposal problems
More battery reserve capacity:
increasing the reliability of the solar system
reducing load disconnects
Opportunity to reduce battery size to lower the system cost
Greater user satisfaction:
get more power when you need it for less money!!

REACTIVE POWER

DEFINITION
Reactive power is a concept used by engineers to describe the background energy movement in
an Alternating Current (AC) system arising from the production of electric and magnetic fields.
These fields store energy which changes through each AC cycle. Devices which store energy by
virtue of a magnetic field produced by a flow of current are said to absorb reactive power; those
which store energy by virtue of electric fields are said to generate reactive power.
Power flows, both actual and potential, must be carefully controlled for a power system
to operate within acceptable voltage limits. Reactive power flows can give rise to substantial
voltage changes across the system, which means that it is necessary to maintain reactive power
balances between sources of generation and points of demand on a 'zonal basis'. Unlike system
frequency, which is consistent throughout an interconnected system, voltages experienced at
points across the system form a "voltage profile" which is uniquely related to local generation
and demand at that instant, and is also affected by the prevailing system network arrangements.
National Grid is obliged to secure the transmission network to closely defined voltage and
stability criteria. This is predominantly achieved through circuit arrangements, transformers and
shunt or static compensation.
SOURCES OF REACTIVE POWER:
Most equipment connected to the electricity system will generate or absorb reactive
power, but not all can be used economically to control voltage. Principally synchronous
generators and specialized compensation equipment are used to set the voltage at particular
points in the System, which elsewhere is determined by the reactive power flows.
Synchronous Generators:
Synchronous machines can be made to generate or absorb reactive power depending upon
the excitation (a form of generator control) applied. The output of synchronous machines is
continuously variable over the operating range and automatic voltage regulators can be used to
control the output so as to maintain a constant system voltage.
Synchronous Compensators:

Certain smaller generators, once run up to speed and synchronized to the system, can be
declutched from their turbine and provide reactive power without producing real power. This
mode of operation is called Synchronous Compensation.
Capacitive and Inductive Compensators:
These are devices that can be connected to the system to adjust voltage levels. A
capacitive compensator produces an electric field thereby generating reactive power whilst an
inductive compensator produces a magnetic field to absorb reactive power. Compensation
devices are available as either capacitive or inductive alone or as a hybrid to provide both
generation and absorption of reactive power.
Overhead Lines and Underground Cables:
Overhead lines and underground cables, when operating at the normal system voltage,
both produce strong electric fields and so generate reactive power. When current flows through a
line or cable it produces a magnetic field which absorbs reactive power. A lightly loaded
overhead line is a net generator of reactive power whilst a heavily loaded line is a net absorber of
reactive power. In the case of cables designed for use at 275 or 400kV the reactive power
generated by the electric field is always greater than the reactive power absorbed by the magnetic
field and so cables are always net generators of reactive power.
Transformers:
Transformers produce magnetic fields and therefore absorb reactive power. The heavier
the current loading the higher the absorption.
Consumer Loads:
Some loads such as motors produce a magnetic field and therefore absorb reactive power
but other customer loads, such as fluorescent lighting, generate reactive power. In addition
reactive power may be generated or absorbed by the lines and cables of distribution systems.

A PHYSICAL ANALOGY FOR REACTIVE POWER:

While there are numerous physical analogies for this quantity called reactive power, one
that is reasonably accurate is the process of filling a water tower tank with water - one bucket at
a time. Suppose you want to fill a water tower tank with water, and the only way that you can do
that is by climbing up a ladder carrying a bucket of water and then dumping the water into the
tank. You then have to go back down the ladder to get more water. Strictly speaking, if you
simply go up a ladder (not carrying anything) and come back down (not carrying anything), you
have not done any work in the process. But, since it did take work to go up the ladder, you must
have gotten all that energy back when you came down. While you may not feel that coming
down the ladder completely restores you to the condition you were in before you went up,
ideally, from an energy conversion viewpoint, you should! If you don't agree, get out your
physics book and check out the official definition of doing work.
OK, if you still don't agree that walking up a ladder and coming back down does not
require any net work, then think of it this way. Would you pay anyone to walk up a ladder and
back down without doing anything at the top? Probably not. But, if they dumped a bucket of
water in the tank while they were at the top, then that would be something worth paying for.
When you carry a bucket of water up the ladder you do a certain amount of work. If you dump
the water at the top and carry an empty bucket down, then you have not gotten all your energy
back (because your total weight coming down is less than going up), and you have done work
during that process. The energy that it takes to go up and down a ladder carrying nothing either
way requires reactive power, but no real power. The energy that it takes to go up a ladder
carrying something and come down without carrying anything requires both real power and
reactive power.
A reminder here is that power is the time rate of energy consumption, so consuming 500
Watts of real power for 30 minutes uses 250 Watt-hours of energy (or 0.25 kilowatt-hours, which
costs about 2.5 cents to generate in the U.S.). The analogy is that voltage in an AC electrical
system is like the person going up and down the ladder. The movement of the water up the ladder
and then down into the tank is like the current in an AC electrical system.
Now, this pulsating power is not good in an electrical system because it causes pulsations
on the shafts of motors and generators which can fatigue them. So, the answer to this pulsation
problem is to have three ladders going up to the water tower and have three people climb up in

sequence (the first person on the first ladder, then the second person on the second ladder, then
the third person on the third ladder) such that there is always a steady stream of water going into
the tank. While the power required from each person is pulsating, the total result of all three
working together in a perfectly balanced, symmetrical sequence results in a constant flow of water
into the tank; this is why we use 3-phase electrical systems where voltages go up and down in
sequence (first A phase, then B phase, and finally C phase).
In AC electrical systems, this sequential up/down pulsation of power in each line is the
heart of the transmission of electrical energy. As in the water tower analogy, having plenty of
water at ground level will not help you if you cannot get it up into the tower. While you may
certainly be strong enough to carry the bucket, you cannot get it there without the ladder. In
contrast, there may be a ladder, but you may not be strong enough to carry the water. Moreover,
the people do take up room around the water tower and limit how much water can go up and
down over a period of time - just as reactive power flow in an electrical system requires a larger
current which limits how much real power can be transmitted.
To make the system more reliable, we might put two sets of three ladders leading up to
the tank on the tower. Then, if one set fails (maybe the water plus the person get too heavy and
the ladder breaks), the other set picks up the slack (that is, has to carry more water). But, this
could eventually overload the second set so that it too fails. This is a cascading outage due to the
overloading of ladders.

SERVICE:
Grid Code Requirements

All BM Units must be capable of supplying their rated power output (MW) at any point
between the limits 0.85 power factor lagging and 0.95 power factor leading at the BM Unit
terminals. Also the short circuit ratio of the BM Unit must not be less than 0.5. The reactive
power output under steady state conditions should be fully available within the voltage range
±5% at 400kV, 275kV, 132kV and lower voltages, and the BM Unit must have a continuously acting
automatic excitation control system to provide constant terminal voltage control without
instability over its entire operating range.
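As a worked illustration of what these power factor limits imply (the 500 MW rating is only an
assumed example, and the usual generator convention of lagging = reactive generation and
leading = reactive absorption is used), the corresponding reactive power range can be calculated
as follows:

% Reactive capability implied by the 0.85 lagging / 0.95 leading limits,
% evaluated for an assumed 500 MW BM Unit.
P = 500;                          % rated active power in MW (assumed example)
Q_generate = P * tan(acos(0.85)); % about 310 Mvar of reactive generation at 0.85 pf lagging
Q_absorb   = P * tan(acos(0.95)); % about 164 Mvar of reactive absorption at 0.95 pf leading
fprintf('Reactive range: +%.0f Mvar to -%.0f Mvar\n', Q_generate, Q_absorb);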
Reactive Power Market Arrangements
Reactive power is procured via the Reactive Power Market, the arrangements of which
are enshrined in Schedule 3 to the Connection and Use of System Code (CUSC). The mechanism
enables National Grid to invite tenders for alternative payment arrangements for the reactive
capability as required by the Grid Code and tenders for the provision of an Enhanced Reactive
Power Service (ERPS).
The two main components are:

Market Agreements whereby Generators and National Grid can enter into a market based
contract on mutually agreed terms. The agreements can cover the Obligatory Reactive
Power Service (ORPS) and/or the Enhanced Reactive Power Service (ERPS); and

Default Arrangements whereby, in the absence of a market agreement, payment is made
to generators for reactive utilization. In accordance with the provisions of CUSC
Schedule 3, all relevant Generators with BM Units have amended Ancillary Services
Agreements to incorporate with respect to those BM Units the default payment
arrangements for the Obligatory Reactive Power Service as more particularly described
in CUSC Schedule 3. Relevant Generators with BM Units not operational but wishing to
respond to this Invitation to Tender are required to amend or conclude Ancillary Services
Agreements in similar fashion in accordance with CUSC Schedule 3 before a Market
Agreement can be entered into.

Obligatory Reactive Power Service:


The Obligatory Reactive Power Service is an Ancillary Service with two essential components:
the provision of a minimum Reactive Power capability, and the making available of that
capability to National Grid.
The capability component of the Obligatory Reactive Power Service is the minimum
Reactive Power capability required of a BM Unit under and in accordance with the Connection
Conditions of the Grid Code, most particularly CC6.3.2. A User therefore does not provide the
Obligatory Reactive Power Service from a Generating Unit which is compliant with Grid Code
CC6.3.2 where compliance is not obligatory for that User in respect of that Generating Unit.
The second component of the Obligatory Reactive Power Service - the making available
of the capability to National Grid to instruct - is typically provided by BM Units in accordance
with the Balancing Codes of the Grid Code. However, it may be provided by other Plant,
specifically Small Independent Generating Plant, where the User and National Grid agree terms
for the provision of suitable metering and communication facilities, including the ability for
National Grid to obtain relevant technical, planning and other data.
The Obligatory Reactive Power Service does not include the provision of Reactive Power
capability from Synchronous Compensation or from static compensation equipment.
The Obligatory Reactive Power Service is more particularly described in subparagraph 1.1 of
CUSC Schedule 3.

MODELING OF CASE STUDY

INTERDEPENDENCE OF THE REACTIVE POWER UNDER CONVENTIONAL CONTROL:
PWM provides fully independent controllability of the converter voltages (and therefore
reactive power transfers) on both sides of the link. This capability is not available to multilevel
configurations under the present control strategies. For instance, if extra reactive power is needed
at the receiving end to maintain the ac terminal voltage constant, the firing angle is increased
and, therefore, the dc voltage is reduced. To continue transmitting the specified power under this
condition, the sending end station must also reduce its dc voltage. The dc voltage reduction is
implemented by a corresponding increase in the firing angle of the two converter groups; this
action will force an unwanted extra injection of reactive power and, thus, an increase of ac
terminal voltage at this end. Such condition would not occur if some PWM control were to be
added to the multilevel configurations. However the use of PWM is currently limited to three
levels and is only used in voltage source conversion schemes.
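A minimal numerical sketch of this interdependence is given below. It uses the standard idealized
current source converter relations in per unit (dc voltage proportional to the cosine of the
firing angle, reactive power equal to the active power times the tangent of the firing angle)
rather than the exact equations of the scheme described here, and it neglects the dc line voltage
drop.

% Sketch of the reactive power interdependence under conventional control.
P   = 1.0;                             % specified dc power, per unit
aR1 = 10*pi/180;  aR2 = 20*pi/180;     % receiving end firing angle before/after its Q demand rises
VdcR1 = cos(aR1); VdcR2 = cos(aR2);    % receiving end dc voltage falls as its firing angle increases
aS1 = acos(VdcR1);  aS2 = acos(VdcR2); % sending end must match the dc voltage to keep P constant
QS1 = P*tan(aS1);   QS2 = P*tan(aS2);  % unwanted change in sending end reactive power injection
fprintf('Sending end Q rises from %.2f to %.2f pu\n', QS1, QS2);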
MULTIGROUP FIRING-SHIFT CONCEPT:
The exchange of reactive power between the converter and the ac system is determined by
the sine of the firing angle. Altering the firing angle has an immediate effect on the dc voltage
level and, thus, to maintain the specified dc power transfer through the link, a corresponding
change of firing angle must be made at the other end, which in turn affects its reactive power
exchange with the ac system. Therefore, under conventional converter control, the reactive powers
injected at the two ends of a multilevel CSC link are interdependent.
In multilevel CSC HVDC interconnections with two twelve pulse groups per terminal
(such as shown in Fig. 1) the same current waveform is produced by each of the 12-pulse
converter groups, and thus the total output current waveform remains the same if a phase-shift is
introduced between the firings of the two groups constituting the converter station.

Fig.1. Simplified diagram of a dc link connecting two ac systems.


When a change of operating conditions at the receiving end demands more reactive power from
the converter, and thus reduces the dc voltage, shifting the firings of the two sending end
converter groups in opposite directions provides the required dc voltage reduction, while
maintaining the reactive power constant (due to the opposite polarity of the two firing angle
corrections). A relatively small change of active power will be caused by the variation of the
fundamental current produced by the shift, but this change can be compensated for by a small
extra correction of the two firing angles. For a converter to operate in the firing-shift mode
(which in the above example is the sending end converter), the firing angle of one group (say
group A) is kept positive (thus providing reactive power), while the second group (say group B)
may act as a source or sink of reactive power (i.e., its firing angle may be positive or
negative).
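The extra degree of freedom can be illustrated numerically with the sketch below, which uses
idealized per-group relations (per-group dc voltage proportional to the cosine of its firing
angle, per-group reactive power proportional to its sine, in per unit) rather than the exact
model of this scheme; it shows how a common firing angle plus an opposite-direction shift meets
a dc voltage target and a reactive power target at the same time.

% Two series-connected groups A and B with firing angles
% alphaA = ac + as and alphaB = ac - as (common component ac, shift component as).
Vdc_target = 1.6;                 % total dc voltage demand, per unit of one group's maximum (assumed)
Q_target   = 0.3;                 % total reactive power demand on the same base (assumed)
S  = hypot(Vdc_target, Q_target); % feasible whenever S <= 2
ac = atan2(Q_target, Vdc_target); % common firing angle component
as = acos(S/2);                   % opposite-direction shift component
alphaA = ac + as;   alphaB = ac - as;    % here alphaA is positive and alphaB negative
check = [cos(alphaA)+cos(alphaB), sin(alphaA)+sin(alphaB)];  % recovers [Vdc_target, Q_target]

Reducing Vdc_target while holding Q_target fixed simply increases the shift component, which is
the mechanism described above; with a single common firing angle, by contrast, the dc voltage and
the reactive power could not be set independently.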

A. Steady State Operation:


To simplify the explanation of the steady state characteristics, only the sending end
operates under firing shift control, while the receiving end uses a common firing angle for the
two converter groups. The generalized method with firing shift at both ends will be used in the
dynamic simulation.

Also, as shown in Fig. 1, the interconnected ac systems are represented as simplified Thevenin
equivalents. At the receiving end the control specifications are the terminal ac voltage and the
dc voltage, while those at the sending end are the dc power transfer and the reactive power.

Receiving end:
As at this end no firing shift control is exercised, the firing angle will be the same for the
two converter groups. The dotted lines in the phasor diagram of Fig. 2 represent the initial
operating condition (with a smaller Thevenin impedance) and the continuous line a new operating
condition with a larger Thevenin impedance.

Fig.2.Operating conditions at the receiving end for two different system strengths.

In both cases, the system reactive power requirements are equally shared between the
converter and the ac source with the same firing angle used in both converter groups. The
condition is represented by equation (1) or, in terms of the double converter ac and dc currents,
by (1a), together with (2) and (3). In terms of the specified power (which is normally controlled
at the sending end), the dc voltages across the link are related by an expression from which,
making the receiving end dc voltage the subject and taking the positive root, (4) is obtained.
The simultaneous solution of (1a)-(4) provides the initial values of the receiving end operating
variables.

Sending end:
Fig. 3 illustrates the two operating conditions in response to the change at the receiving end.
Initially the sending end is set with one firing angle positive and one negative to demonstrate
the phase shift control principle. When a change in operating conditions occurs, as a result of
an increased Thevenin impedance and, thus, of firing angle at the receiving end, the sending end
must compensate by an increase in its firing angles (as shown in Fig. 3) to maintain the specified
active power transfer.

Fig.3. Firing shift control maintaining constant reactive power at the sending end for two
different system strengths.
The following relationships apply to the sending end: (5) and (6) and, because the two converter
groups carry the same direct current due to the series connection, (7). The active power is equal
to the specified dc power, as stated by (8), and the reactive power is given by (9). With the dc
power and the sending end reactive power specified, the remaining unknowns, including the two
firing angles, can be derived from the simultaneous solution of (4)-(9).


In the steady state, the values of the firing angles and, thus, the internal reactive power
circulation between the two converter groups, can be reduced by the use of the transformer
on-load tap change.

CONTROL STRUCTURE:
For complete flexibility the sending end needs to control real and reactive power, while the
receiving end needs to keep the converter dc voltage constant (so as to minimize the dc current
for a given real power setting) and control the reactive power. With reactive power control at
both ends, the

controllers can easily be configured for optimum power transfer at the system level depending on
operating objectives, which usually involves providing constant power factor at the sending end
and constant ac terminal voltage at the receiving end.
In order to control the real and reactive power over the complete operating range the
converter response needs to be linear. Standard PID controllers are unsuitable for this application
as their gain is static, and although they may give suitable performance over a narrow band, the
latter is not acceptable over the complete range. This is explained in more detail later. Fig. 4
illustrates the control ranges of the real and reactive power responses for a range of values of
the two firing angles. It is clear that these controller surfaces are very nonlinear, and it is
not hard to understand why a linear PID controller would be unsuitable.

Fig.4. Calculated real and reactive power for varied firing angle

Given the aforementioned controller surfaces, it is difficult to visualize how the controller
must perform, especially since the controller firing angles are expected to operate equally well in
the positive and negative regions. What is needed is a controller that operates for all
combinations of the two firing angles without the need to manually switch controller gains and
control actions. An example of four controller operating conditions is shown in Fig. 5.

Fig.5. Firing shift control providing (a) large P and Q, (b) large Q and smaller P, (c) small P and
Q, and (d) large P, no Q.
These diagrams show that the controller is expected to operate over a wide range of
conditions and that the change in firing angle has the greatest influence on the real power near
the X axis and on the reactive power near the Y axis. This is better explained by examining the
real and reactive power contribution of one converter in isolation. The real power transferred by
the converter depends on the cosine of the firing angle, while the reactive power depends on its
sine. What is of interest to control system designers is the rate of change of the controlled
outputs P and Q with respect to the firing angle, as this determines the level of gain (or
sensitivity) in the system response. Basic differentiation reveals that the rate of change is
proportional to the sine of the firing angle for real power and to its cosine for reactive power,
which makes this system very nonlinear.

As mentioned earlier, conventional controller operation is confined to a relatively small firing
angle range and functions with a fixed gain, thereby assuming that the system is linear over that
small range. This control philosophy becomes even less suitable when we consider that an ideal,
independent and fully flexible controller should be able to provide a combination of the two
firing angles that satisfies the requirements of both the real and reactive power simultaneously.

Fig. 6 illustrates a simplified block diagram of what the controller must achieve, the goal
being a mapping function that translates the real and reactive power demands into the two firing
angles, so as to make the nonlinear converter appear linear. Once this is done, linear control
theory may be used successfully.

Fig.6. Block diagram of the nonlinear system control objective.


The only information representing the behavior of the converter system is given by the
steady state equations (8) and (9), but this is sufficient initially, because they show the
influence that the two firing angles have on the output variables P and Q. The rate of change of
the output variables with respect to the firing angles can be expressed as (10) and (11). These
are found by differentiating the steady state equations (8) and (9), giving (12)-(15). Expressing
the dynamic system in matrix form gives (16).
Equation (16) can be solved using basic matrix theory.
Using the partial derivatives of the steady state P and Q equations in Matrix A, it is
possible to model the converter system's transient response (but not the system state). If the
matrix is nonsingular, its inverse can be used to linearize the converter system behavior. The
inverse of Matrix A, with the common gain component grouped on the left side, becomes (17). This
equation indicates that the overall system gain depends on the difference between the two firing
angles, together with the real power contribution associated with the other group's firing angle
(for P) and the reactive power contribution associated with the other group's firing angle
(for Q). While making sense in theory, this needs to be realized in practice.
Examining the system on an incremental basis, as the increment is reduced the accuracy is
increased, becoming very close to the continuous integral equivalent. It could be argued that in
each partial differential equation the effect of one firing angle on the other, and vice versa,
is not fully captured, but in a practical system this effect can be minimized with suitable
feedback.
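A compact sketch of the linearizing mapping of (10)-(17) is shown below; it substitutes the
idealized per-group relations P = cos(a1) + cos(a2) and Q = sin(a1) + sin(a2) (per unit) for the
exact steady state equations, so the matrix entries are illustrative only.

% Translate small power errors (dP, dQ) into firing angle increments (da1, da2)
% by inverting the Jacobian of the idealized relations.
a1 = 30*pi/180;  a2 = -10*pi/180;     % present firing angles (example values)
A  = [-sin(a1), -sin(a2);             % dP/da1  dP/da2
       cos(a1),  cos(a2)];            % dQ/da1  dQ/da2
dPQ = [-0.05; 0.02];                  % errors from the P and Q channels, per unit
% det(A) = sin(a2 - a1), so the mapping degenerates when the two angles are equal,
% which reflects the dependence on the angle difference noted in connection with (17).
if abs(det(A)) > 1e-6
    da = A \ dPQ;                     % firing angle corrections passed to the integrators
end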


A. PRACTICAL IMPLEMENTATION:
Figs. 7 and 8 show the implementation of the theory into a real system controller.
In Fig. 7, the controller has two separate channels, one for the real power component and one for
the reactive power component. For each channel the principle is the same: the error is calculated
by subtracting the measured power from the power order and is fed into the PID controller. The
resulting increments become the inputs into the nonlinear mapping function, which resolves the
firing angle corrections from the P and Q channels, respectively. The nonlinear errors are
combined and the outputs are then integrated to provide the required firing angles as inputs into
the converter firing logic.

Fig.7. Implementation of nonlinear control theory.


The nonlinear mapping functions in Fig. 7 for P and for Q are represented by the corresponding
terms of the inverse matrix in (17).

Fig. 8. Sending end controller block diagram, with the main linearizing components in (a) and
the common angle difference calculation in (b).
Fig. 8 shows how the system is realized in a practical controller. The controller layout
follows almost exactly the analytical development from (10) to (17), with only additional
low-pass filters added to prevent ringing when the error is almost zero. It is important to note
that the common component of the converter control is calculated separately [Fig. 8(b)], since
this determines the overall gain of the system. Hard limits are provided on this calculation so
as to prevent wind-up and instability. Also, to keep the firing angles within their permitted
range, limits are placed on the integrators. The receiving end controller topology is much the
same as that of the sending end, but as it must control the dc voltage and the reactive power,
the layout is different. Using the steady state equations (5) and (9), the inverse transfer
function becomes (18).
Given the steady-state equations and taking into consideration (6), it becomes apparent
that although full control is justified by the theory, the range of Q control depends on the
magnitude of the converter current. Optimum dc power transmission occurs when the dc current is
minimized, as this also minimizes the dc link power losses; however, this affects the range of
reactive power controllability at both the sending and receiving ends. As the reactive power
circulation is confined to the ac system side, the magnitude of the ac current in each converter
group determines the level of reactive power controllability in the ac system. The real power,
which is also a function of the ac current magnitude, is determined by the combination of dc
voltage and dc current on the dc link. To understand the reactive power controllability limits,
it must be realized that the same amount of real power can be transferred with a combination of
high dc voltage and low dc current, or of low dc voltage and high dc current.
An example of this in a multigroup MLCR, in per-unit terms, is given by (19) and (20). With the
same real power transferred in both cases, (9) shows that (20) would yield twice the reactive
power for a given firing angle as (19).
So with conflicting objectives in real power efficiency and reactive power controllability,
a compromise must be made between control range and overall efficiency during system design.
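The trade-off can be illustrated with the simple per-unit sketch below, which is consistent with
the description above but is not a reproduction of (19) and (20):

% Same per-unit real power delivered at two different dc operating points.
P = 1.0;                           % transmitted power, per unit
Vdc_a = 1.0;  Idc_a = P/Vdc_a;     % high dc voltage / low dc current
Vdc_b = 0.5;  Idc_b = P/Vdc_b;     % low dc voltage / high dc current
alpha = 15*pi/180;                 % example firing angle
Q_a = Idc_a * sin(alpha);          % reactive controllability scales with the converter current
Q_b = Idc_b * sin(alpha);          % twice Q_a, but at the cost of higher dc link losses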

MATLAB
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation. Typical uses include math and
computation, algorithm development, data acquisition, modeling, simulation and prototyping, data
analysis, exploration and visualization, scientific and engineering graphics, and application
development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does not require
dimensioning. This allows you to solve many technical computing problems, especially those
with matrix and vector formulations, in a fraction of the time it would take to write a program
in a scalar non-interactive language such as C or Fortran.
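For example, a small linear system can be set up and solved directly, with no array dimensioning
(a trivial illustration unrelated to the converter model):

% Solve A*x = b without declaring the size of any array.
A = [4 1; 2 3];
b = [1; 2];
x = A \ b;                     % backslash operator solves the linear system
residual = norm(A*x - b);      % close to zero, confirming the solution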
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide
easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB
engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software
for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in
mathematics, engineering, and science. In industry, MATLAB is the tool of choice for
high-productivity research, development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend
the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are
available include signal processing, control systems, neural networks, fuzzy logic, wavelets,
simulation, and many others.

The MATLAB system consists of five main parts:

Development Environment. This is the set of tools and facilities that help you use MATLAB
functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB
desktop and Command Window, a command history, an editor and debugger, and browsers for
viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function Library. This is a vast collection of computational
algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to
more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and
fast Fourier transforms.
The MATLAB Language. This is a high-level matrix/array language with control flow
statements, functions, data structures, input/output, and object-oriented programming features.
It allows both "programming in the small" to rapidly create quick and dirty throw-away programs,
and "programming in the large" to create large and complex application programs.
Graphics. MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well
as annotating and printing these graphs. It includes high-level functions for two-dimensional
and three-dimensional data visualization, image processing, animation, and presentation
graphics. It also includes low-level functions that allow you to fully customize the appearance
of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API). This is a library that allows you to
write C and Fortran programs that interact with MATLAB. It includes facilities for calling
routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for
reading and writing MAT-files.

SIMULINK:
INTRODUCTION:
Simulink is a software add-on to MATLAB, a mathematical tool developed by The MathWorks
(http://www.mathworks.com), a company based in Natick, Massachusetts. MATLAB is powered by
extensive numerical analysis capability.
Simulink is a tool used to visually program dynamic systems (those governed by
differential equations) and examine the results. Any logic circuit or control system for a
dynamic system can be built using the standard building blocks available in the Simulink
libraries.
Various toolboxes for different techniques, such as fuzzy logic, neural networks, DSP and
statistics, are available with Simulink, and these enhance the processing power of the tool. The
main advantage is the availability of templates/building blocks, which avoid the necessity of
typing code for small mathematical processes.
CONCEPT OF SIGNAL AND LOGIC FLOW:
In Simulink, data/information from various blocks is sent to other blocks by lines
connecting the relevant blocks. Signals can be generated and fed into blocks (dynamic/static).
Data can be fed into functions, and can then be dumped into sinks, which could be scopes or
displays, or saved to a file. Data can be connected from one block to another, and can be
branched, multiplexed, etc. In simulation, data is processed and transferred only at discrete
times, since all computers are discrete systems. Thus, a simulation time step (otherwise called
an integration time step) is essential, and the selection of that step is determined by the
fastest dynamics in the simulated system.

Fig. Simulink library browser

CONNECTING BLOCKS:

Fig. Connecting blocks


To connect blocks, left-click and drag the mouse from the output of one block to the input
of another block.

SOURCES AND SINKS:


The sources library contains the sources of data/signals that one would use in a dynamic
system simulation. One may want to use a constant input, a sinusoidal wave, a step, a repeating
sequence such as a pulse train, a ramp etc. One may want to test disturbance effects, and can use
the random signal generator to simulate noise. The clock may be used to create a time index for
plotting purposes. The ground could be used to connect to any unused port, to avoid warning
messages indicating unconnected ports.
The sinks are blocks where signals are terminated or ultimately used. In most cases, we
would want to store the resulting data in a file, or a matrix of variables. The data could be
displayed or even stored to a file. The Stop block could be used to stop the simulation if the
input to that block (the signal being sunk) is non-zero. The figure below shows the available
blocks in the sources and sinks libraries. Unused signals must be terminated, to prevent warnings
about unconnected signals.

Fig. Sources and sinks

CONTINUOUS AND DISCRETE SYSTEMS:


All dynamic systems can be analyzed as continuous or discrete time systems. Simulink
allows you to represent these systems using transfer functions, integration blocks, delay blocks
etc.

Fig. Continuous and discrete systems

NON-LINEAR OPERATORS:
A main advantage of using tools such as Simulink is the ability to simulate non-linear
systems and arrive at results without having to solve analytically. It is very difficult to arrive at
an analytical solution for a system having non-linearities such as saturation, the signum
function, limited slew rates, etc. In simulation, since systems are analyzed using iterations,
non-linearities are not a hindrance. One such block is the saturation block, used to indicate a
physical limitation on a parameter, such as a voltage signal to a motor. Manual switches are
useful when trying simulations with different cases. Switches are the logical equivalent of
if-then statements in
programming.

Fig. simulink blocks

MATHEMATICAL OPERATIONS:
Mathematical operators such as products, sum, logical operations such as and, or, etc. can
be programmed along with the signal flow. Matrix multiplication becomes easy with the matrix
gain block. Trigonometric functions such as sine or inverse tangent (atan) are also available.
Relational operators such as equal to, greater than, etc. can also be used in logic circuits.

Fig. Simulink math blocks

SIGNALS & DATA TRANSFER:


In complicated block diagrams, there may arise the need to transfer data from one portion
to another portion of the block. They may be in different subsystems. That signal could be
dumped into a goto block, which is used to send signals from one subsystem to another.
Multiplexing helps us remove clutter due to excessive connectors, and makes
matrix (column/row) visualization easier.

Fig. signals and systems

MAKING SUBSYSTEMS
Drag a subsystem from the Simulink Library Browser and place it in the parent block
where you would like to hide the code. The type of subsystem depends on the purpose of the
block. In general one will use the standard subsystem but other subsystems can be chosen. For
instance, the subsystem can be a triggered block, which is enabled only when a trigger signal is
received.

Open (double click) the subsystem and create input / output PORTS, which transfer
signals into and out of the subsystem. The input and output ports are created by dragging them
from the Sources and Sinks directories respectively. When ports are created in the subsystem,
they automatically create ports on the external (parent) block. This allows for connecting the
appropriate signals from the parent block to the subsystem.
SETTING SIMULATION PARAMETERS:
Running a simulation in the computer always requires a numerical technique to solve a
differential equation. The system can be simulated as a continuous system or a discrete system
based on the blocks inside. The simulation start and stop time can be specified. In case of
variable step size, the smallest and largest step size can be specified. A Fixed step size is
recommended and it allows for indexing time to a precise number of points, thus controlling the
size of the data vector. Simulation step size must be decided based on the dynamics of the
system. A thermal process may warrant a step size of a few seconds, but a DC motor in the
system may be quite fast and may require a step size of a few milliseconds.
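These settings are normally entered through the Configuration Parameters dialog, but they can
also be applied from the MATLAB command line, as in the sketch below; the model name used here
is only an assumed example.

% Programmatic simulation set-up (the model name 'dclink_model' is assumed).
model = 'dclink_model';
load_system(model);
set_param(model, 'SolverType', 'Fixed-step', ...  % fixed step, as recommended above
                 'FixedStep',  '50e-6', ...       % step size chosen from the fastest dynamics
                 'StartTime',  '0', ...
                 'StopTime',   '1');
simOut = sim(model);                              % run the simulation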

MATLAB DESIGN OF CASE STUDY AND RESULTS


Case I:

Real and reactive power order changes at the sending and receiving end

Case II:

Reactive power responses under power factor and terminal voltage control for a series of step
changes to real power.

CONCLUSION
A new type of converter control has been developed, applicable to multilevel HVDC
schemes with two or more 12-pulse groups per terminal. It has been shown theoretically, and
verified by EMTDC simulation using an MLCR configuration, that the use of a controllable shift
between the firings of the series connected converter groups permits independent reactive power
control at the two dc link terminals. This provides four quadrant power controllability to
multilevel current source HVDC transmission and, thus, makes this alternative equally flexible to
PWM-controlled voltage source conversion, without the latter's limitations in terms of power
and voltage ratings. It can be expected that MLCR, combined with firing-shift control, should
compete favorably with the conventional current source technology for very large power
applications.

