Latest Technologies
FACTS (Flexible AC Transmission Systems) is defined as "a power electronic based system and other static equipment that provide control of one or more AC transmission system parameters to enhance controllability and increase power transfer capability."
Series compensation
In series compensation, the FACTS device is connected in series with the power system and works as a controllable voltage source. Long transmission lines have appreciable series inductance, so a large line current causes a large voltage drop. To compensate, series capacitors are connected.
Shunt compensation
In shunt compensation, the FACTS device is connected in shunt with the power system and works as a controllable current source. Shunt compensation is of two types:
Shunt capacitive compensation
This method is used to improve the power factor. Whenever an inductive load is connected to the transmission line, the power factor lags because of the lagging load current. To compensate, a shunt capacitor is connected, which draws a current leading the source voltage. The net result is an improvement in power factor.
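As a rough numerical illustration of shunt capacitive compensation (the load values below are invented for the example), the reactive power a shunt capacitor must supply follows the standard relation Qc = P * (tan(theta1) - tan(theta2)):

```python
import math

def pf_correction_kvar(p_kw, pf_initial, pf_target):
    """Reactive power (kVAr) a shunt capacitor must supply to raise
    a load's power factor from pf_initial to pf_target."""
    theta1 = math.acos(pf_initial)  # phase angle before correction
    theta2 = math.acos(pf_target)   # phase angle after correction
    return p_kw * (math.tan(theta1) - math.tan(theta2))

# Example: a 100 kW load at 0.70 lagging corrected to 0.95:
qc = pf_correction_kvar(100.0, 0.70, 0.95)
print(round(qc, 1))  # ≈ 69.2 kVAr
```

The worse the initial power factor, the more capacitive kVAr is needed for the same real power.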
Shunt inductive compensation
This method is used either when charging the transmission line or when there is very low load at the receiving end. With very low or no load, very little current flows through the transmission line, and the shunt capacitance of the line causes voltage amplification (the Ferranti effect). The receiving-end voltage may rise well above the sending-end voltage, even to double it in the case of very long transmission lines. To compensate, shunt inductors are connected across the transmission line.
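The Ferranti rise can be estimated with the standard lossless-line model (not stated in the original text): for an open-circuited line, Vr/Vs = 1/cos(beta*l), where beta is the phase constant. A sketch, assuming a phase velocity near the speed of light:

```python
import math

def ferranti_ratio(length_km, f_hz=50.0, v_km_s=3.0e5):
    """Receiving-end to sending-end voltage ratio for an open-circuited
    lossless line. Assumes phase velocity ~ speed of light (3e5 km/s)."""
    beta = 2 * math.pi * f_hz / v_km_s   # phase constant, rad/km
    return 1.0 / math.cos(beta * length_km)

# A 500 km open line at 50 Hz shows roughly a 15% voltage rise:
print(round(ferranti_ratio(500), 3))  # ≈ 1.155
```

The rise grows rapidly with line length, which is why shunt reactors are applied on very long lines.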
A high-voltage, direct current (HVDC) electric power transmission system uses direct current for the bulk transmission of electrical power, in contrast with the more common alternating current systems. For long-distance transmission, HVDC systems are less expensive and suffer lower electrical losses. For shorter distances, the higher cost of DC conversion equipment compared to an AC system may still be warranted where other benefits of direct current links are useful.
The modern form of HVDC transmission uses technology developed extensively in the
1930s in Sweden at ASEA. Early commercial installations included one in the Soviet Union in
1951 between Moscow and Kashira, and a 10-20 MW system between Gotland and mainland
Sweden in 1954. The longest HVDC link in the world is currently the Inga-Shaba 1,700 km
(1,100 mi) 600 MW link connecting the Inga Dam to the Shaba copper mine, in the Democratic
Republic of Congo. High Voltage Direct Current solutions have become more desirable for the
following reasons:
Environmental advantages
Economical (cheapest solution)
Asynchronous interconnections
Power flow control
Added benefits to the transmission (stability, power quality etc.)
Line-commutated (naturally commutated) converters are the most widely used in HVDC systems today. The component that enables the conversion process is the thyristor, a controllable semiconductor that can carry very high currents (4000 A) and block very high voltages (up to 10 kV). By connecting thyristors in series it is possible to build a thyristor valve able to operate at very high voltages (several hundred kV). The thyristor valve is operated at line frequency (50 Hz or 60 Hz), and by adjusting the firing (control) angle it is possible to change the DC voltage level of the bridge. This is how the transmitted power is controlled rapidly and efficiently.
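The firing-angle control described above can be sketched numerically using the ideal six-pulse bridge relation from converter textbooks (not given in the original text): Vd = (3*sqrt(2)/pi) * V_LL * cos(alpha), with commutation overlap neglected:

```python
import math

def bridge_dc_voltage(v_ll_kv, alpha_deg):
    """Ideal no-load DC output of a six-pulse thyristor bridge:
    Vd = (3*sqrt(2)/pi) * V_LL * cos(alpha); overlap neglected."""
    vd0 = 3 * math.sqrt(2) / math.pi * v_ll_kv  # voltage at alpha = 0
    return vd0 * math.cos(math.radians(alpha_deg))

# Increasing the firing angle lowers the bridge's DC voltage:
print(round(bridge_dc_voltage(100.0, 0), 1))   # ≈ 135.0 kV
print(round(bridge_dc_voltage(100.0, 15), 1))  # ≈ 130.4 kV
```

At alpha near 90° the average DC voltage approaches zero, and beyond 90° the bridge inverts, which is how power flow is reversed in a line-commutated HVDC link.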
HVDC is typically chosen in situations such as:
• Undersea cables, where high capacitance causes additional AC losses (e.g., the 250 km Baltic Cable between Sweden and Germany)
• Endpoint-to-endpoint long-haul bulk power transmission without intermediate 'taps', for
example, in remote areas
• Increasing the capacity of an existing power grid in situations where additional wires are
difficult or expensive to install
• Power transmission and stabilization between unsynchronised AC distribution systems
• Connecting a remote generating plant to the distribution grid, for example Nelson River
Bipole
• Stabilizing a predominantly AC power-grid, without increasing prospective short circuit
current
• Reducing line cost. HVDC needs fewer conductors as there is no need to support multiple
phases. Also, thinner conductors can be used since HVDC does not suffer from the skin
effect
• Facilitating power transmission between countries that use AC at differing voltages and/or frequencies
• Synchronizing AC produced by renewable energy sources
• HVDC can carry more power per conductor because, for a given power rating, the constant voltage in a DC line is lower than the peak voltage in an AC line. In AC power, the root mean square (RMS) voltage is the standard measurement, but RMS is only about 71% of the peak voltage, and it is the peak voltage that determines the required insulation thickness and conductor spacing. Because DC operates at a constant voltage equal to its maximum, an existing transmission corridor with equally sized conductors and insulation can deliver roughly 100% more power with DC than with AC, which can lower costs.
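The 71% figure above is just the crest factor of a sine wave, which a two-line check confirms (the 230 V value is an arbitrary example):

```python
import math

v_rms = 230.0                    # AC RMS voltage (example value)
v_peak = v_rms * math.sqrt(2)    # insulation must withstand the peak
print(round(v_peak, 1))          # ≈ 325.3 V
print(round(v_rms / v_peak, 3))  # ≈ 0.707, i.e. RMS is ~71% of peak
```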
EMBEDDED SYSTEM
In general, "embedded system" is not an exactly defined term, as many systems have
some element of programmability. For example, handheld computers share some elements with embedded systems, such as the operating systems and microprocessors that power them, but they are not truly embedded systems, because they allow different applications to be loaded and peripherals to be connected.
Debugging tools are another issue. Since you can't always run general programs on your embedded processor, you can't always run a debugger on it, which makes fixing your program difficult. Special hardware such as JTAG ports can overcome this issue in part. However, if you stop on a breakpoint while your system is controlling real-world hardware (such as a motor), permanent equipment damage can occur. As a result, people doing embedded programming quickly become masters at using serial I/O channels and error-message-style debugging.
To save costs, embedded systems frequently have the cheapest processors that can do the job, which means your programs need to be written as efficiently as possible. When dealing with large data sets, issues like memory cache misses that never matter in PC programming can hurt you. Luckily, this won't happen too often: use reasonably efficient algorithms to start, and optimize only when necessary. Of course, normal profilers won't work well, for the same reason debuggers don't. So more intuition and an understanding of your software and hardware architecture are necessary to optimize effectively.
Memory is also an issue. For the same cost-saving reasons, embedded systems usually have the least memory they can get away with. That means their algorithms must be memory efficient (unlike in PC programming, you will frequently sacrifice processor time for memory rather than the reverse). It also means you can't afford to leak memory. Embedded applications generally use deterministic memory techniques and avoid the default "new" and "malloc" functions, so that leaks can be found and eliminated more easily.
Other resources programmers expect may not even exist. For example, most embedded processors do not have a hardware FPU (floating-point unit). These resources either need to be emulated in software or avoided altogether.
Embedded systems frequently control hardware and must be able to respond to it in real time. Failure to do so could cause inaccuracy in measurements, or even damage hardware such as motors. This is made even more difficult by the lack of resources available. Almost all embedded systems need to be able to prioritize some tasks over others, and to put off or skip low-priority tasks such as UI in favor of high-priority tasks like hardware control.
Fixed-Point Arithmetic
Some embedded microprocessors may have an external unit for performing floating-point arithmetic (FPU), but most low-end embedded systems have no FPU. Most C compilers will provide software floating-point support, but this is significantly slower than a hardware FPU. As a result, many embedded projects enforce a no-floating-point rule on their programmers. This is in strong contrast to PCs, where the FPU has been integrated into all the major microprocessors and programmers take fast floating-point calculations for granted. Many DSPs also lack an FPU and require fixed-point arithmetic to obtain acceptable performance.
A common technique used to avoid the need for floating-point numbers is to change the magnitude of the data stored in your variables so you can use fixed-point mathematics. For example, if you are adding inches and only need to be accurate to the hundredth of an inch, you could store the data as hundredths rather than inches. This allows you to use normal integer arithmetic. The technique works as long as you know, ahead of time, the magnitude of the data you are adding and the accuracy to which you need to store it.
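The hundredths-of-an-inch trick can be sketched as follows (a Python illustration of the idea; a real embedded program would use plain C integers, but the scaling is the same):

```python
SCALE = 100  # store lengths in hundredths of an inch

def to_fixed(inches):
    """Convert a length in inches to fixed-point hundredths (rounded)."""
    return int(round(inches * SCALE))

def fixed_to_str(value):
    """Format a fixed-point value back to inches for display."""
    return f"{value // SCALE}.{value % SCALE:02d}"

a = to_fixed(12.34)  # 1234
b = to_fixed(0.27)   # 27
total = a + b        # plain integer addition, no FPU needed
print(fixed_to_str(total))  # 12.61
```

All intermediate arithmetic stays in integers; floating point appears only at the input and display boundaries, where speed rarely matters.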
LATEST DISTRIBUTED CONTROL SYSTEM TECHNOLOGY
Distributed control systems are powerful assets for new and modernized power plants. Thanks
to three product generations of technology innovations, these systems now provide new benefits
— including improved O&M efficiency, greater plant design flexibility, and improved process
control and asset reliability — that help competitive plants advance in the game.
With each major system release, many new DCS capabilities and features have been
added, resulting in new benefits for plant designers and owners.
One limiting factor of first-generation systems is that they were designed to use
proprietary communication technologies. Consequently, connections to third-party systems were
typically limited to custom-developed interfaces. This changed during the 1990s, and the DCS
became recognized as the optimal vehicle for integrating process data from the various
automation platforms used within a typical plant.
The open system DCS provided standard communication interfaces for connecting the
various automation subsystems. Supporting integrated plant operations for all automated plant
equipment, the DCS provided a centralized and common "single window view" of plant data for
control, logical interlock, alarm, and history. Enterprise management solutions, also enabled by
the open system, provided new opportunities for fleet management centers to improve operations
by remotely monitoring plant processes, analyzing unit efficiencies, and supporting coordination
between operating units.
Additionally, the use of commercial off-the-shelf technology emerged during this period
as standard Ethernet networking components and Microsoft Windows-based systems were
applied at the DCS HMI layer.
As demand for more open systems grew — along with strong interest in integrating field
bus technology and making full use of an integrated operations and engineering environment —
the third-generation DCS emerged.
" Aspect links," which are simple, menu-driven links to O&M information, can be
launched via mouse click from DCS graphical objects, alarm points, or a controller configuration
drawing (Figure 1). Aspect links of interest to plant operators may include alarm decision system
information, operational help screens, live video feeds, start-up instructions, and trends. Links of
interest to instrumentation and control personnel may include detailed troubleshooting
information such as plant piping and instrumentation drawings, equipment O&M manuals,
application guides, and smart device management tools. Links used by maintenance management
may include work orders, fault reports, or spare part inventories.
1. Linked up.
Improving the efficiency of plant operations and maintenance, the 800xA distributed
control system (DCS) provides aspect link technology for navigating to important plant
information from DCS client screens. Source: ABB
Permissions can be configured to manage individual views into the aspect links, ensuring that system users can only view information relevant to their specific job function.
Process Optimization and Asset Optimization.
To support the goal of increased plant process efficiency, advanced control can be added to the DCS using model predictive control (MPC) technology. The MPC approach provides a multivariable algorithm that runs at a much higher frequency than earlier optimization techniques (cycle times are typically measured in seconds rather than minutes). The result is an accurate process model that can be added to the base system controls to produce less variability and smoother transitions. Less variability typically enables processes to operate closer to equipment design limits, enabling significant improvements in steam temperature, ramp rate, and heat rate, better handling of complex coordinated control, and reduced emissions.
Asset optimization, now available within most third-generation DCS designs, facilitates increased overall equipment effectiveness (OEE) and avoids unplanned shutdowns, thereby increasing plant availability. Asset optimization can also extend the life of plant assets by using advanced predictive maintenance techniques. For plant assets, a logical analysis function called the "asset monitor" provides 24/7 supervision of the plant device or process. Assets that can be monitored include DCS components, communication networks, smart instrumentation, process control loops, pumps, and drives. Power plant processes such as feedwater heaters, water quality, and heat exchangers can also be monitored. Asset monitor options can be scaled to include any number of assets, from plant to fleet.
By applying object-oriented technology, asset optimization is seamlessly integrated
with commercially available computerized maintenance management systems (CMMS). From
the DCS process graphics, plant maintenance staff can get an asset management view of the plant
to access work orders, spare part inventories, and maintenance activities. They can also rely upon
the DCS to identify problems and automatically generate a fault report for automated download
back into the CMMS.
Third-generation DCS controllers and I/O hardware occupy a much smaller footprint than earlier systems. DIN-rail components operate on 24 VDC and can be connected via redundant fiber-optic networks. This makes for a more scalable solution, as it is much easier and more economical to physically distribute clusters of remote I/O throughout the plant. DCS controller technology has also evolved to support SIL 2 and SIL 3 safety standards as well as the traditional National Fire Protection Association 85 requirements applied to many utility applications.
IEC 61850 is a recent development used for integrating the electrical system into the plant DCS. With capabilities for integrating intelligent electronic devices (IEDs) for control, asset monitoring, and device management, the IEC 61850 standard is emerging with connectivity options for protection relays, drives, medium- and high-voltage switchgear, and other equipment. Also, specifically for power plant applications, DCS controllers can integrate field-bus specialty cards for turbine control (overspeed, auto-synchronization, and valve position), vibration condition monitoring, and flame scanners.
As an object is used repeatedly throughout a project, it maintains its reference ("inheritance") to the original library object. This allows a consistent design approach for all similar plant devices and also simplifies maintenance of control configurations when code modifications are required. Control programming methods are available to support function blocks from first- and second-generation DCS systems as well as IEC-standard function blocks, ladder logic, instruction lists, structured text, and sequential function charts.
Improved Power Plant Simulators.
When used for operator training, simulator systems typically provide a substantial
opportunity to improve plant operational efficiency and expertise. Simulators can also serve as
testing grounds for verifying DCS logic changes. In earlier DCS generations, power plant
simulators offered controller hardware-based "stimulated" or PC "emulated" simulators. The
latest DCS simulator technology provides a "virtual controller" PC-based environment for
running the original equipment manufacturer (OEM) version of the controller configuration.
SIMULATION SOFTWARES
What is MATLAB
Interactions with other languages
MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created, allowing MATLAB data types to be passed and returned. The dynamically loadable object files created by compiling such functions are termed "MEX-files" (for MATLAB executable).
Libraries written in Java, ActiveX, or .NET can be directly called from MATLAB, and many MATLAB libraries (for example, XML or SQL support) are implemented as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done with a MATLAB extension sold separately by MathWorks.
Through the MATLAB Toolbox for Maple, MATLAB commands can be called from within the
Maple Computer Algebra System, and vice versa.
PSPICE
PSpice is a SPICE analog circuit and digital logic simulation program that runs on personal computers, hence the first letter "P" in its name. It was developed by MicroSim and is used in electronic design automation. MicroSim was bought by OrCAD, which was subsequently purchased by Cadence Design Systems. The name is an acronym for Personal Simulation Program with Integrated Circuit Emphasis. Today it has evolved into an analog mixed-signal simulator.
PSpice was the first version of UC Berkeley SPICE available on a PC, having been released in January 1984 to run on the original IBM PC. This initial version ran from two 360 KB floppy disks and later included a waveform viewer and analyser program called Probe. Subsequent versions improved in performance and moved to DEC/VAX minicomputers, Sun workstations, the Apple Macintosh, and the Microsoft Windows platform.
PSpice, now developed toward more complex industry requirements, is integrated into the complete systems design flow from OrCAD and Cadence Allegro. It also supports many additional features not available in the original Berkeley code, such as Advanced Analysis with automatic optimization of a circuit, encryption, a Model Editor, support for parameterized models, several internal solvers, auto-convergence and checkpoint restart, a magnetic part editor, and the Tabrizi core model for non-linear cores.
VLSI TECHNOLOGY
Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s, when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into the hundreds of millions of transistors.
This is the field that involves packing more and more logic devices into smaller and smaller areas. Thanks to VLSI, circuits that would once have taken boardfuls of space can now be put into a space a few millimeters across! This has opened up a big opportunity to do things that were not possible before. VLSI circuits are everywhere: your computer, your car, your brand-new state-of-the-art digital camera, your cell phone, and what have you. All this involves a lot of expertise on many fronts within the same field.
VLSI has been around for a long time; there is nothing new about it. But as a side effect of advances in the world of computers, there has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside this, obeying Moore's law, the capability of an IC has increased exponentially over the years in terms of computational power, utilisation of available area, and yield. The combined effect of these two advances is that people can now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems, where intelligent devices are put inside everyday objects, and ubiquitous computing, where small computing devices proliferate to such an extent that even the shoes you wear may actually do something useful, like monitoring your heartbeat! These two fields are kind of related, and getting into their description could easily lead to another article.
The first semiconductor chips held one transistor each. Subsequent advances added more and more transistors and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors, and capacitors, making it possible to fabricate one or more logic gates on a single device. This level is now known retrospectively as small-scale integration (SSI). Improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI), and then to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and hundreds of millions of individual transistors.
At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater-than-VLSI levels of integration are no longer in widespread use. Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are VLSI or better.
NANOTECHNOLOGY
There has been much debate on the future implications of nanotechnology. Nanotechnology has the potential to create many new materials and devices with wide-ranging applications, such as in medicine, electronics, and energy production. On the other hand, nanotechnology raises many of the same issues as the introduction of any new technology, including concerns about the toxicity and environmental impact of nanomaterials [1] and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.
Fundamental concepts
One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range of 0.12 to 0.15 nm, and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length.
To put that scale in another context, the comparative size of a nanometer to a meter is the same
as that of a marble to the size of the earth. Or another way of putting it: a nanometer is the
amount a man's beard grows in the time it takes him to raise the razor to his face.
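A quick sanity check of the marble-to-Earth comparison above (the 1 cm marble diameter is an assumed typical value):

```python
marble_m = 0.01   # assumed diameter of a marble: about 1 cm
earth_m = 1.27e7  # mean diameter of the Earth in meters

ratio = marble_m / earth_m
print(f"{ratio:.1e}")  # ≈ 7.9e-10, the same order as 1 nm : 1 m (1e-9)
```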
Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and
devices are built from molecular components which assemble themselves chemically by
principles of molecular recognition. In the "top-down" approach, nano-objects are constructed
from larger entities without atomic-level control.