
INDEX

1. INTRODUCTION

2. WHAT IS A NEURON?

3. THE NERVOUS SYSTEM

4. ARTIFICIAL NEURON

A simple neuron

Firing Rules

Pattern Recognition.
5. ARCHITECTURE OF NEURAL NETWORKS

Feed-forward networks

Feedback networks

Network layers

Perceptrons

6. TYPES OF LEARNING

Learning with a teacher

Learning without a teacher

1. Reinforcement learning
2. Unsupervised learning

7. MEMORY OF NEURAL NETWORKS

8. WHY IT IS DIFFICULT TO MODEL A BRAIN-LIKE NETWORK

9. APPLICATIONS

10. FUTURE OF NEURAL NETWORKS


1. INTRODUCTION

The power and speed of modern digital computers are truly
astounding. No human can ever hope to compute a million operations
a second. However, there are some tasks for which even the most
powerful computers cannot compete with the human brain, perhaps
not even with the intelligence of an earthworm. Imagine the power of
a machine that had the abilities of both computers and humans. It
would be the most remarkable thing ever. And all humans could live
happily ever after (or would they?).

Before discussing the specifics of artificial neural nets, though,
let us examine what makes real neural nets - brains - function the
way they do. Perhaps the single most important concept in neural net
research is the idea of connection strength. Neuroscience has
given us good evidence for the idea that connection strengths - that
is, how strongly one neuron influences the neurons connected to it
- are the real information holders in the brain. Learning, repetition of
a task, even exposure to a new or continuing stimulus can cause the
brain's connection strengths to change: some synaptic connections
become reinforced and new ones are created, while others weaken
or in some cases disappear altogether.

The second essential element of neural connectivity is the
excitation/inhibition distinction. In human brains, each neuron is
either excitatory or inhibitory, which is to say that its activation will
either increase the firing rates of connected neurons or decrease
them, respectively. The amount of excitation or inhibition produced
is, of course, dependent on the connection strength - a stronger
connection means more inhibition or excitation; a weaker connection
means less.

The third important component in determining a neuron's response is
called the transfer function. Without getting into more technical
detail, the transfer function describes how a neuron's firing rate varies
with the input it receives. A very sensitive neuron may fire with very
little input, for example. A neuron may have a threshold, firing
rarely below threshold and vigorously above it.
Each of these behaviors can be represented mathematically, and that representation is called the transfer function.
It is often convenient to forget the transfer function and think of the neurons as simple addition machines - more
activity in equals more activity out. This is not really accurate, though, and to develop a good understanding of an
artificial neural network, the transfer function must be taken into account.

Armed with these three concepts - connection strength, inhibition/excitation, and the transfer function - we can
now look at how artificial neural nets are constructed. In theory, an artificial neuron (often called a 'node')
captures all the important elements of a biological one. Nodes are connected to each other, and the strength of each
connection is normally given a numeric value between -1.0, for maximum inhibition, and +1.0, for maximum
excitation. All values between the two are acceptable, with higher-magnitude values indicating stronger connection
strength. The transfer function in artificial neurons, whether in a computer simulation or in actual microchips
wired together, is typically built right into the nodes' design.

Neural networks approach this problem by trying to mimic the
structure and function of our nervous system. Many researchers
believe that AI (Artificial Intelligence) and neural networks are
completely opposite in their approach. Conventional AI is based on
the symbol-system hypothesis. Loosely speaking, a symbol system
consists of indivisible entities called symbols, which can form more
complex entities by simple rules. The hypothesis then states that
such a system is capable of, and is necessary for, intelligence.

The general belief is that neural networks are a sub-symbolic science.
Before symbols themselves can be recognized, something must be
done so that conventional AI can then manipulate those symbols. To
make this point clear, consider symbols such as cow, grass, house,
etc. Once these symbols and the "simple rules" which govern them
are known, conventional AI can perform miracles. But discovering
that something is a cow is not trivial. It can perhaps be done using
conventional AI and symbols such as white, legs, etc., but it would
be tedious; and certainly, when you see a cow, you instantly
recognize it as one without counting its legs. Progress in this area
can be made only by breaking down this line of distinction between AI
and neural networks, and combining the results obtained in both into
a unified framework.

2. WHAT IS A NEURON?
Most neurons, like the one shown here, consist of a cell body plus an
axon and many dendrites. The axon is a protuberance that delivers
the neuron’s output to connections with other neurons. Dendrites are
protuberances that provide plenty of surface area, facilitating
connection with the axons of other neurons. Dendrites often divide a
great deal, forming extremely bushy dendritic trees. Axons divide to
some extent but far less than dendrites.

A neuron does nothing unless the collective influence of all its inputs
reaches a threshold level. Whenever that threshold level is reached
the neuron produces a full-strength output in the form of a narrow
pulse that proceeds from the cell body, down the axon and into the
axon’s branches. Whenever this happens, the neuron is said to fire.
Because a neuron either fires or does nothing, it is said to be an all-or-
none device.

Axons influence dendrites over narrow gaps called synapses.


Stimulation at some synapses encourages neurons to fire.
Stimulation at others discourages neurons from firing. There is
mounting evidence that learning takes place in the vicinity of
synapses and has something to do with the degree to which
synapses translate the pulse traveling down one neuron’s axon into
excitation or inhibition of the next neuron. Through those connections
electrical pulses are transmitted, and information is carried in the
timing and the frequency with which these pulses are emitted. So, our
neuron receives information from other neurons, processes it and
then relays this information to other neurons. A question which
immediately arises is: what form does this processing take?
Clearly the neuron must generate some kind of output based on the
cumulative input. We still don't know the exact answer to the question
of what happens in a biological neuron. However, we do know that
our neuron integrates the pulses that arrive and, when this integration
exceeds a certain limit, our neuron in turn emits a pulse.

Finally, one more thing that you should know is that dendrites modify
the amplitude of the pulses traveling through them. This modification
varies with time, as the network `learns'. So we could say that
when a connection (dendrite) is very strong, the neuron from which
it comes plays an important role in the network; on the other hand,
when a connection is very weak, that neuron's influence is smaller.
Thus the neural network stores information in the pattern of
connection weights.

The nervous system may be viewed as a three-stage system, as
depicted in the block diagram. Central to the system is the brain,
represented by the neural (nerve) net, which continually receives
information, perceives it, and makes appropriate decisions. Two sets
of arrows are shown in the figure. Those pointing from left to right
indicate the forward transmission of information-bearing signals
through the system. The arrows pointing from right to left signify the
presence of feedback in the system. The receptors convert stimuli
from the human body or the external environment into electrical
impulses that convey information to the neural net (brain). The
effectors convert electrical impulses generated by the neural net into
discernible responses as system outputs.

The number of neurons in the human brain is staggering. Current
estimates suggest there may be on the order of 10^11 neurons per
person. If the number of neurons is staggering, the number of
synapses must be toppling. In the cerebellum - that part of the brain
that is crucial to motor coordination - a single neuron may receive
inputs from as many as 10^5 synapses. Inasmuch as most of the
neurons in the brain are in the cerebellum, each brain has on the
order of 10^16 synapses.
Compared to the biological brain, a typical artificial neural network (ANN) is not likely to have more than 1,000
artificial neurons. The artificial neurons we use to build our neural networks are truly primitive in comparison to
those found in the brain. The neural networks we are presently able to design are just as primitive compared to the
local circuits and the interregional circuits in the brain. What is really satisfying is the remarkable progress that we
have made on so many fronts during the past two decades.
3. THE NERVOUS SYSTEM

For our purpose, it will be sufficient to know that the nervous system
consists of neurons which are connected to each other in a rather
complex way. Each neuron can be thought of as a node and the
interconnections between them are edges.
Such a structure is called a directed graph. Further, each edge
has a weight associated with it which represents how strongly the two
neurons it connects can interact: the larger the weight, the stronger
the signal that can pass through the edge.

Functioning of the Nervous System

The nature of the interconnection between two neurons can be such
that one neuron either stimulates or inhibits the other. An interaction
can take place only if there is an edge between the two neurons. If
neuron A is
connected to neuron B as below with a weight w, then if A is
stimulated sufficiently, it sends a signal to B. The signal depends on
the weight w, and the nature of the signal, whether it is stimulating or
inhibiting. This depends on whether w is positive or negative. If
sufficiently strong signals are sent, B may become stimulated.
Note that A will send a signal only if it is stimulated sufficiently, that is,
if its stimulation is more than its threshold. Also if it sends a signal, it
will send it to all nodes to which it is connected. The threshold for
different neurons may be different. If many neurons send signals to A,
the combined stimulus may be more than the threshold.

Next if B is stimulated sufficiently, it may trigger a signal to all neurons to which it is connected. Depending on the
complexity of the structure, the overall functioning may be very complex but the functioning of individual neurons
is as simple as this. Because of this we may dare to try to simulate this using software or even special purpose
hardware.
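The weighted-graph view above can be sketched in a few lines of Python; the node names, weights, and thresholds here are invented for illustration:

```python
# The network as a weighted directed graph: edges[(src, dst)] = weight.
# Positive weights stimulate, negative weights inhibit.
edges = {("A", "B"): 0.7, ("A", "C"): -0.4, ("B", "C"): 0.9}
threshold = {"A": 0.5, "B": 0.6, "C": 0.5}

def propagate(node, stimulus):
    """If `node` is stimulated beyond its threshold, it sends a weighted
    signal along every outgoing edge; otherwise it sends nothing."""
    if stimulus <= threshold[node]:
        return {}
    return {dst: w for (src, dst), w in edges.items() if src == node}

print(propagate("A", 0.8))  # {'B': 0.7, 'C': -0.4}
print(propagate("A", 0.3))  # {} -- below threshold, no signal sent
```

Note that, as in the text, a firing node signals every node it is connected to; whether each signal stimulates or inhibits is carried by the sign of the edge weight.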
4. ARTIFICIAL NEURON.

A simple neuron

An artificial neuron is a device with many inputs and one output. The
neuron has two modes of operation; the training mode and the using
mode. In the training mode, the neuron can be trained to fire (or not),
for particular input patterns. In the using mode, when a taught input
pattern is detected at the input, its associated output becomes the
current output. If the input pattern does not belong in the taught list of
input patterns, the firing rule is used to determine whether to fire or
not.

Firing Rules
The firing rule is an important concept in neural networks and
accounts for their high flexibility. A firing rule determines how one
calculates whether a neuron should fire for any input pattern. It
relates to all the input patterns, not only the ones on which the node
was trained.

A simple firing rule can be implemented using the Hamming-distance
technique. The rule goes as follows:

Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and
others which prevent it from doing so (the 0-taught set). Then the patterns not in the collection cause the node to
fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than
with the 'nearest' pattern in the 0-taught set. If there is a tie, the pattern remains in the undefined state.
For example, a 3-input neuron is taught to output 1 when the input
(X1, X2 and X3) is 111 or 101, and to output 0 when the input is 000 or
001. Then, before applying the firing rule, the truth table is:

X1: 0 0 0 0 1 1 1 1

X2: 0 0 1 1 0 0 1 1

X3: 0 1 0 1 0 1 0 1

OUT: 0 0 0/1 0/1 0/1 1 0/1 1

As an example of the way the firing rule is applied, take the pattern
010. It differs from 000 in 1 element, from 001 in 2 elements, from
101 in 3 elements and from 111 in 2 elements. Therefore, the
'nearest' pattern is 000, which belongs in the 0-taught set. Thus the
firing rule requires that the neuron should not fire when the input is
010. On the other hand, 011 is equally distant from two taught
patterns that have different outputs, and thus the output stays
undefined (0/1). By applying the firing rule in every column, the
following truth table is obtained:

X1: 0 0 0 0 1 1 1 1

X2: 0 0 1 1 0 0 1 1

X3: 0 1 0 1 0 1 0 1

OUT: 0 0 0 0/1 0/1 1 1 1


The difference between the two truth tables is called the
generalization of the neuron. Therefore the firing rule gives the
neuron a sense of similarity and enables it to respond 'sensibly' to
patterns not seen during training.
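The Hamming-distance firing rule is small enough to implement directly. The Python sketch below reproduces the 3-input example above, including its generalized truth table:

```python
def hamming(a, b):
    # number of positions in which two patterns differ
    return sum(p != q for p, q in zip(a, b))

def fire(pattern, one_taught, zero_taught):
    """Hamming-distance firing rule: answer as taught for known patterns;
    otherwise side with the nearest taught pattern, or stay undefined on a tie."""
    if pattern in one_taught:
        return "1"
    if pattern in zero_taught:
        return "0"
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return "1"
    if d0 < d1:
        return "0"
    return "0/1"  # tie: output stays undefined

one_taught, zero_taught = ["111", "101"], ["000", "001"]
outs = [fire(f"{i:03b}", one_taught, zero_taught) for i in range(8)]
print(outs)  # ['0', '0', '0', '0/1', '0/1', '1', '1', '1']
```

The printed row matches the OUT row of the generalized truth table, including the two undefined (0/1) entries for the tied patterns 011 and 100.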

Pattern Recognition.

An important application of neural networks is pattern recognition.


Pattern recognition can be implemented by using a feed-forward
(figure 1) neural network that has been trained accordingly. During
training, the network is trained to associate outputs with input
patterns. When the network is used, it identifies the input pattern and
tries to output the associated output pattern. The power of neural
networks comes to life when a pattern that has no output associated
with it, is given as an input. In this case, the network gives the output
that corresponds to a taught input pattern that is least different from
the given pattern.

Figure 1.
For example:

The network of figure 1 is trained to recognise the patterns T and H.
The associated output patterns are all black and all white respectively,
as shown below.
If we represent black squares with 0 and white squares with 1, then
the truth tables for the 3 neurons after generalization are:

X11: 0 0 0 0 1 1 1 1

X12: 0 0 1 1 0 0 1 1

X13: 0 1 0 1 0 1 0 1

OUT: 0 0 1 1 0 0 1 1
Top neuron

X21: 0 0 0 0 1 1 1 1

X22: 0 0 1 1 0 0 1 1

X23: 0 1 0 1 0 1 0 1

OUT: 1 0/1 1 0/1 0/1 0 0/1 0

Middle neuron

X21: 0 0 0 0 1 1 1 1

X22: 0 0 1 1 0 0 1 1
X23: 0 1 0 1 0 1 0 1

OUT: 1 0 1 1 0 0 1 0

Bottom neuron

From the tables, the following associations can be extracted:
In this case, it is obvious that the output should be all black, since the
input pattern is almost the same as the 'T' pattern.

Here also, it is obvious that the output should be all white, since the
input pattern is almost the same as the 'H' pattern.
Here, the top row is two errors away from a T and three from an H, so
the top output is black. The middle row is one error away from both T
and H, so the output is random. The bottom row is one error away from
T and two away from H; therefore the output is black. The total output
of the network is still in favor of the T shape.
5. ARCHITECTURE OF NEURAL NETWORKS.

Feed-forward networks

Feed-forward ANNs allow signals to travel one way only; from input to
output. There is no feedback (loops) i.e. the output of any layer does
not affect that same layer. Feed-forward ANNs tend to be
straightforward networks that associate inputs with outputs. They are
extensively used in pattern recognition. This type of organization is
also referred to as bottom-up or top-down.

Feedback networks

Feedback networks can have signals traveling in both directions by


introducing loops in the network. Feedback networks are very
powerful and can get extremely complicated. Feedback networks are
dynamic; their 'state' is changing continuously until they reach an
equilibrium point. They remain at the equilibrium point until the input
changes and a new equilibrium needs to be found. Feedback
architectures are also referred to as interactive or recurrent, although
the latter term is often used to denote feedback connections in single-
layer organizations.
An example of a simple feed forward network
Network layers

The commonest type of artificial neural network consists of three


groups, or layers, of units: a layer of "input" units is connected to a
layer of "hidden" units, which is connected to a layer of "output"
units. (see the figure)

-The activity of the input units represents the raw information that is
fed into the network.

-The activity of each hidden unit is determined by the activities of the
input units and the weights on the connections between the input and
the hidden units.

-The behavior of the output units depends on the activity of the
hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units
are free to construct their own representations of the input. The
weights between the input and hidden units determine when each
hidden unit is active, and so by modifying these weights, a hidden
unit can choose what it represents. We also distinguish single-layer
and multi-layer architectures. The single-layer organization, in which
all units are connected to one another, constitutes the most general
case and has greater potential computational power than hierarchically
structured multi-layer organizations. In multi-layer networks, units are
often numbered by layer instead of following a global numbering.
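A forward pass through such a three-layer network can be sketched as follows. The text leaves the transfer function unspecified, so the logistic sigmoid used here, and all the weights, are assumptions for illustration:

```python
import math

def layer(inputs, weights, biases):
    # each unit sums its weighted inputs and applies a transfer function;
    # the logistic sigmoid is one common choice (an assumption here)
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def forward(x, w_hid, b_hid, w_out, b_out):
    hidden = layer(x, w_hid, b_hid)     # input units -> hidden units
    return layer(hidden, w_out, b_out)  # hidden units -> output units

# hypothetical 2-input, 2-hidden, 1-output network
y = forward([1.0, 0.0],
            w_hid=[[0.5, -0.3], [0.8, 0.2]], b_hid=[0.0, -0.1],
            w_out=[[1.0, -1.0]], b_out=[0.2])
print(y)  # a single output activity strictly between 0 and 1
```

Changing the input-to-hidden weights changes what each hidden unit responds to, which is exactly the freedom the paragraph above describes.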

Perceptrons

This is a very simple model and consists of a single `trainable'


neuron. Trainable means that its threshold and input weights are
modifiable. Inputs are presented to the neuron and each input has a
desired output (determined by us). If the neuron doesn't give the
desired output, then it has made a mistake. To rectify this, its
threshold and/or input weights must be changed. How this change is
to be calculated is determined by the learning algorithm. The output
of the perceptron is constrained to Boolean values - (true, false),
(1,0), (1,-1) or whatever. This is not a limitation because if the output
of the perceptron were to be the input for something else, the
output edge could be made to have a weight. Then the output would
be dependent on this weight.

The perceptron looks like -


x1, x2, ..., xn are inputs, these could be real numbers or Boolean
values depending on the problem.

y is the output and is Boolean.

w1, w2, ..., wn are weights of the edges and are real valued.
T is the threshold and is real valued.

The output y is 1 if the net input which is

w1*x1 + w2*x2 + ... + wn*xn

is greater than the threshold T. Otherwise the output is zero.
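This output rule transcribes directly into Python (the example weights and threshold below are made up):

```python
def perceptron_output(x, w, T):
    # net input w1*x1 + ... + wn*xn compared against the threshold T
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net > T else 0

print(perceptron_output([1, 0, 1], [0.4, 0.9, 0.3], T=0.5))  # net = 0.7 -> 1
```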

The idea is that we should be able to train this perceptron to respond


to certain inputs with certain desired outputs. After the training period,
it should be able to give reasonable outputs for any kind of input. If it
wasn't trained for that input, then it should try to find the best possible
output depending on how it was trained.

So during the training period we will present the perceptron with


inputs one at a time and see what output it gives. If the output is
wrong, we will tell it that it has made a mistake. It should then change
its weights and/or threshold properly to avoid making the same
mistake later.
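One concrete version of this training procedure is the classic perceptron learning rule. The sketch below is illustrative rather than the text's own algorithm: the learning rate and the Boolean AND task are assumed choices.

```python
def train_perceptron(samples, n_inputs, rate=0.1, epochs=100):
    w, T = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, desired in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > T else 0
            err = desired - y
            if err != 0:
                mistakes += 1
                # nudge each weight toward the desired behaviour...
                w = [wi + rate * err * xi for wi, xi in zip(w, x)]
                # ...and raise/lower the threshold accordingly
                T -= rate * err
        if mistakes == 0:  # a full error-free pass: training is done
            break
    return w, T

# learn the Boolean AND function (a linearly separable task)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, T = train_perceptron(samples, n_inputs=2)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) > T else 0 for x, _ in samples])
# -> [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; for tasks that are not linearly separable, a single trainable neuron cannot succeed no matter how long it trains.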

Note that the model of the perceptron normally given is slightly


different from the one pictured here. Usually, the inputs are not
directly fed to the trainable neuron but are modified by some
"preprocessing units". These units could be arbitrarily complex,
meaning that they could modify the inputs in any way. These units
have been deliberately eliminated from our picture, because it would
be helpful to know what can be achieved by just a single trainable
neuron, without all its "powerful friends".

Units labeled A1, A2, ..., Aj, ..., Ap are called association units and their
task is to extract specific, localised features from the input images.
Perceptrons mimic the basic idea behind the mammalian visual
system. They were mainly used in pattern recognition, even though
their capabilities extended a lot further.
6. TYPES OF LEARNING.

Learning with a teacher.


This form of learning incorporates an external teacher, so that each
output unit is told what its desired response to input signals ought to
be. In conceptual terms, we may think of the teacher as having
knowledge of the environment, with that knowledge being represented
by a set of input-output examples. The environment is, however,
unknown to the neural network of interest. Suppose now that the
teacher and the neural network are both exposed to a training vector
(i.e. an example) drawn from the environment. By virtue of built-in
knowledge, the teacher is able to provide the neural network with the
desired response for that training vector. Indeed, the desired response
represents the optimum action to be performed by the neural network.
The network parameters are adjusted under the combined influence of
the training vector and the error signal. The error signal is defined as
the difference between the desired response and the actual response
of the network. This adjustment is carried out iteratively, in a step-by-
step fashion, with the aim of eventually making the neural network
emulate the teacher; the emulation is presumed to be optimum in
some statistical sense. In this way, knowledge of the environment
available to the teacher is transferred to the neural network through
training as fully as possible. When this condition is reached, we may
then dispense with the teacher and let the neural network deal with
the environment completely by itself. This is error-correction
learning.
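The core of this error-correction scheme can be sketched with a single linear unit (an illustrative simplification; the "environment" mapping y = 2*x used below is invented):

```python
def delta_step(w, x, desired, rate=0.1):
    actual = sum(wi * xi for wi, xi in zip(w, x))  # network's actual response
    error = desired - actual                       # the error signal
    # parameters adjusted under the combined influence of the
    # training vector and the error signal
    return [wi + rate * error * xi for wi, xi in zip(w, x)]

# the "teacher" knows the target mapping y = 2*x (a made-up environment)
w = [0.0]
for _ in range(100):
    w = delta_step(w, [1.0], desired=2.0)
print(round(w[0], 3))  # -> 2.0  (the network has come to emulate the teacher)
```

After enough iterations the weight settles at the teacher's value, at which point the teacher can be dispensed with, just as the paragraph above describes.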
Learning without a teacher.

In this process there is no teacher, i.e. there are no labeled examples


of the function to be learned by the network. It is classified in two
divisions-

1. Reinforcement learning:

In this, the learning of an input-output mapping is performed through
continued interaction with the environment in order to minimise a
scalar index of performance.
The diagram above shows a system built around a critic that converts
a primary reinforcement signal received from the environment into a
higher-quality reinforcement signal called the heuristic reinforcement
signal, both of which are scalar inputs. The system is designed to learn
under delayed reinforcement, which means that the system observes
a temporal sequence of stimuli (i.e. state vectors), also received from
the environment, which eventually results in the generation of the
heuristic reinforcement signal.

Delayed reinforcement learning is difficult to perform for two basic
reasons:

a) There is no teacher to provide a desired response at each
step of the learning process.

b) The delay incurred in the generation of the primary
reinforcement signal implies that the learning machine must solve
a temporal credit-assignment problem. By this we mean that
the learning machine must be able to assign credit and blame
individually to each action in the sequence of time steps that led
to the final outcome, while the primary reinforcement may only
evaluate the outcome.

2. Unsupervised learning:
If we want to simulate the human brain, self-organization is probably
the best way to realize this. It is known that the cortex of the human
brain is subdivided into different regions, each responsible for certain
functions. The neural cells organize themselves in groups according
to incoming information. The most famous neural network model
geared towards this type of unsupervised learning is the Kohonen
network, named after its inventor, Teuvo Kohonen. This time, unlike
back-propagation networks, the inputs and outputs are not presented
to the network at the same time. The Kohonen network relies on a
type of learning called competitive learning, where neurons
"compete" for the privilege of learning, and the "correct" output is
not known.

Competition is intrinsically a nonlinear operation. The important
aspect of competition is that it leads to optimization at a local level
without the need for global control to assign system resources. The
typical network consists of a layer of processing elements (PEs), all
receiving the same input. The processing element with the best
output (either maximum or minimum, depending on the criterion) is
declared the winner. The concept of choosing a winner is a
fundamental issue in competitive learning. In digital computers,
choosing the best processing element is incredibly simple (just a
search for the largest value). We wish to construct a network that will
find the largest (or smallest) output without global control. We call this
simple system a winner-take-all network. Let us assume that we have
n inputs x1, x2, x3, ..., xn and we want to create an n-output network
that provides only a single nonzero output, at the processing element
that corresponds to the largest input, while the outputs of all other
processing elements are set to zero:

Yk = 1 if xk is the largest input
Yk = 0 otherwise
The figure above displays a topology we can use to implement this
network. Each processing element has a similar nonlinearity (a
clipped-linear region) with a self-exciting connection of fixed weight
+1, and it is laterally connected to all the other processing elements
by negative weights -ε (lateral inhibition), where 0 < ε < 1/N. We
assume that all inputs are positive, the slope of each processing
element is 1, and the initial condition for no input is 0 output. The
input is presented as an initial condition to the network, i.e., it is
presented for one sample and then pulled back. The lateral inhibition
drives all the outputs towards zero at an exponential rate. As all the
smaller processing elements approach 0, the largest processing
element (PE k) is less and less affected by the lateral inhibition. At
this point its self-excitation drives its output high, which enforces the
0 values of the other processing elements. Hence this solution is
stable. Note that the processing elements compete for the output
activity, and only one wins it.
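These dynamics are easy to simulate. The sketch below uses a clipped-linear nonlinearity, self-excitation of +1, and lateral weights of -eps with 0 < eps < 1/N, as described above; the particular input vector is made up:

```python
def winner_take_all(x, steps=200):
    n = len(x)
    eps = 0.5 / n                  # lateral inhibition, 0 < eps < 1/N
    y = list(x)                    # input presented once, as an initial condition
    for _ in range(steps):
        total = sum(y)
        # each PE: its own activity (self-excitation +1) minus eps times the
        # activity of every other PE, clipped so output never goes negative
        y = [max(0.0, yi - eps * (total - yi)) for yi in y]
    return y

y = winner_take_all([0.3, 0.5, 0.9])
print([round(v, 2) for v in y])  # only the PE with the largest input stays nonzero
```

The smaller activities decay to exactly zero within a few iterations, after which the winner's output no longer receives any inhibition and remains stable, as the text describes.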

In this network topology, there is feedback among the processing
elements. Here the output takes some time to stabilize, unlike in
feed-forward networks, which are instantaneous (i.e. a pattern is
presented, and a stable output is obtained). Another difference is that
the weights are fixed beforehand, so this topology does not need
training.
Here only one output is active at a time, which means that an output
is singled out from the others according to some principle (here, the
largest value). The amplitude difference at the input could be small,
but at the output it is very clear, so the network amplifies the
difference, creating a selector mechanism that can be applied to
many different problems. A typical application of the winner-take-all
network is at the output of another network (such as LAN) to select
the highest output. Thus the net makes a decision based on the most
probable answer.
7. MEMORY OF NEURAL NETWORKS

In a neurobiological context, memory refers to the relatively enduring
neural alterations induced by the interaction of an organism with
its environment. Furthermore, for the memory to be useful, it must be
accessible to the nervous system in order to influence future
behavior. However, an activity pattern must first be stored in
memory through a learning process.
Memory and learning are intricately connected. An associative
memory is one that offers the following characteristics:

1) The memory is distributed.

2) Both the stimulus (key) pattern and the response (stored)
pattern of an associative memory consist of data vectors.

3) Information is stored in memory by setting up a spatial
pattern of neural activities across a large number of neurons.

4) Information contained in a stimulus not only determines its
storage location in memory but also provides an address for its
retrieval.

5) Although neurons do not represent reliable and low-noise
computing cells, the memory exhibits a high degree of
resistance to noise and damage of a diffusive nature.

6) There may be interactions between individual patterns
stored in memory (otherwise the memory would have to be
exceptionally large to accommodate and store a large number
of patterns in perfect isolation from each other). There is,
therefore, the distinct possibility of the memory making
errors during the recall process.
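A minimal model exhibiting several of these characteristics is the linear associator (correlation memory). This is a standard textbook construction rather than anything specified in this document: key/response vector pairs are stored in one shared weight matrix, so every pattern is distributed across all the weights.

```python
def store(pairs, n):
    # accumulate the outer product (response x key) for every pair:
    # all patterns share the same distributed weight matrix
    W = [[0.0] * n for _ in range(n)]
    for key, resp in pairs:
        for i in range(n):
            for j in range(n):
                W[i][j] += resp[i] * key[j]
    return W

def recall(W, key):
    # the stimulus itself addresses the memory; no storage location is looked up
    return [sum(W[i][j] * kj for j, kj in enumerate(key)) for i in range(len(W))]

# two hypothetical pairs with orthonormal keys (recall is then exact;
# correlated keys would interact and cause recall errors, as in point 6)
pairs = [([1, 0, 0], [0.5, 0.2, 0.1]), ([0, 1, 0], [0.3, 0.3, 0.3])]
W = store(pairs, 3)
print(recall(W, [1, 0, 0]))  # -> [0.5, 0.2, 0.1]
```

Both stored patterns live in the same matrix W, which illustrates characteristics 1 through 4; the crosstalk that appears when keys are not orthogonal illustrates characteristic 6.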
8. WHY IT IS DIFFICULT TO MODEL A BRAIN-LIKE NEURAL NETWORK

We have seen that the functioning of individual neurons is quite
simple. Then why is it difficult to achieve our goal of combining the
abilities of computers and humans?

The difficulty arises because of the following –


1) It is difficult to find out which neurons should be connected to
which. This is the problem of determining the neural network
structure. Further, the interconnections in the brain are constantly
changing. The initial interconnections seem to be largely governed by
genetic factors.

2) The weights on the edges and thresholds in the nodes are
constantly changing. This problem has been the subject of much
research and has been solved to a large extent. The approach has
been as follows -

Given some input, if the neural network makes an error, then it can
be determined exactly which neurons were active before the error.
Then we can change the weights and thresholds appropriately to
reduce this error.
For this approach to work, the neural network must "know" that it has made a mistake. In real life, the mistake
usually becomes obvious only after a long time. This situation is more difficult to handle since we may not know
which input led to the error.
Also notice that this problem can be considered a generalization of
the previous problem of determining the neural network structure: if
this one is solved, that one is also solved. This is because if the
weight between two neurons is zero, it is as good as the two neurons
not being connected at all. So if we can figure out the weights
properly, the structure becomes known. But there may be better
methods of determining the structure.

3) The functioning of individual neurons may not be so simple
after all. For example, remember that if a neuron receives signals
from many neighboring neurons, the combined stimulus may exceed
its threshold. Actually, the neuron need not receive all signals at
exactly the same time, but it must receive them all within a short
time interval.

It is usually assumed that such details will not affect the functioning of
the simulated neural network much. But maybe they will.
Another example of deviation from normal functioning is that some
edges can transmit signals in both directions. Actually, all edges can
transmit in both directions, but usually they transmit in only one
direction, from one neuron to another.
9. APPLICATIONS

Neural networks cannot do anything that cannot be done using
traditional computing techniques, but they can do some things
which would otherwise be very difficult. In particular, they can form a
model from their training data (or possibly input data) alone. This is
particularly useful with sensory data, or with data from a complex
(e.g. chemical, manufacturing, or commercial) process. There may be
an algorithm, but it is not known, or it has too many variables. It is
easier to let the network learn from examples.
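As a minimal illustration of learning from examples alone, a single perceptron can learn the logical AND function from its truth table, without the rule ever being programmed in explicitly (a sketch; the learning rate and epoch count are arbitrary):

```python
def predict(weights, bias, x):
    """Threshold unit: fires when the weighted sum reaches zero."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

# Training examples: the truth table of logical AND.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the data suffice here
    for x, target in examples:
        error = target - predict(weights, bias, x)
        # perceptron learning rule: nudge weights toward the target
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(weights, bias, x) for x, _ in examples])  # [0, 0, 0, 1]
```

The same principle, scaled up to many neurons and real-valued data, is what makes the applications below possible.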
Neural networks are being used in the following areas:

Neural networks in medicine

Artificial Neural Networks (ANNs) are currently a 'hot' research area in
medicine, mostly for modelling parts of the human body and
recognising diseases from various scans (e.g. cardiograms, CAT
scans, ultrasonic scans, etc.).
Neural networks are well suited to recognising diseases from scans.
What is needed is a set of examples that are representative of all the
variations of the disease.

Modelling and Diagnosing the Cardiovascular System:

Neural networks are used experimentally to model the human
cardiovascular system. Diagnosis can be achieved by building a
model of an individual's cardiovascular system and comparing it
with real-time physiological measurements taken from the patient.
If this routine is carried out regularly, potentially harmful medical
conditions can be detected at an early stage, making the
process of combating the disease much easier.

Neural Networks in business

Business is a diversified field with several general areas of
specialisation, such as accounting or financial analysis. Almost any
neural network application would fit into one business area or
another.
There is some potential for using neural networks for business
purposes, including resource allocation and scheduling. There is also
strong potential for using neural networks for database mining, that
is, searching for patterns implicit within the explicitly stored
information in databases.

in marketing:

Networks have been used to improve marketing mailshots. One
technique is to run a test mailshot and look at the pattern of returns
from it. The idea is to find a predictive mapping from the data
known about the clients to how they have responded. This mapping is
then used to direct further mailshots.
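A hypothetical sketch of this idea, using a small logistic-regression model in place of a full network (all client features and figures are invented for illustration):

```python
import math

def sigmoid(z):
    if z < -60:          # guard against math.exp overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

# Returns from a test mailshot: (age_in_decades, past_purchases) -> responded?
returns = [([2.0, 0.0], 0), ([3.0, 1.0], 0), ([4.0, 3.0], 1),
           ([5.0, 4.0], 1), ([3.5, 0.5], 0), ([4.5, 5.0], 1)]

w, b, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):                          # plain stochastic gradient descent
    for x, y in returns:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        grad = p - y                           # gradient of the log-loss
        w = [wi - rate * grad * xi for wi, xi in zip(w, x)]
        b -= rate * grad

def score(x):
    """Predicted probability that a client responds to a mailshot."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Direct further mailshots at the clients with the highest predicted response.
print(score([4.8, 4.5]) > score([2.5, 0.2]))   # True
```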

in investment analysis:

Neural networks are used to attempt to predict the movement of
stocks, currencies, etc., from previous data. Here, they are replacing
earlier, simpler linear models.

in signature analysis:

Neural networks serve as a mechanism for comparing signatures
made (e.g. in a bank) with those stored. This is one of the first
large-scale applications of neural networks in the USA, and is also
one of the first to use a neural network chip.

in process control:

There are clearly applications to be made here: most processes
cannot be described by computable algorithms. Newcastle
University's Chemical Engineering Department is working with
industrial partners (such as Zeneca and BP) in this area.

in monitoring:

Networks have been used to monitor:

• the state of aircraft engines. By monitoring vibration levels
and sound, early warning of engine problems can be given.

• diesel engines. British Rail has been testing a similar
application.

10. FUTURE OF NEURAL NETWORKS

A great deal of research is going on in neural networks worldwide.
This ranges from basic research into new and more efficient learning
algorithms, to networks which can respond to temporally varying
patterns (both ongoing at Stirling), to techniques for implementing
neural networks directly in silicon. One chip is already commercially
available, but it does not include adaptation. Edinburgh
University has implemented a neural network chip, and is working
on the learning problem. Production of a learning chip would allow the
application of this technology to a whole range of problems where the
price of a PC and software cannot be justified. There is particular
interest in sensory and sensing applications: nets which learn to
interpret real-world sensors and learn about their environment.

New Application areas:

Pen PCs

PCs on which one can write on a tablet, with the writing recognized
and translated into (ASCII) text.

Speech and Vision recognition systems

Not new, but neural networks are becoming increasingly part of
such systems. They are used as a system component, in
conjunction with traditional computers.

White goods and toys

As neural network chips become available, the possibility of
simple, cheap systems which have learned to recognise simple
entities (e.g. walls looming, or simple commands like Go or Stop)
may lead to their incorporation in toys, washing machines, etc.
The Japanese are already using a related technology, fuzzy logic, in
this way. There is considerable interest in the combination of fuzzy
and neural technologies.
BIBLIOGRAPHY

BOOKS

1. NEURAL NETWORKS - Simon Haykin

2. ARTIFICIAL INTELLIGENCE - Patrick Henry Winston

3. INTRODUCTION TO ARTIFICIAL NEURAL NETWORKS - Jacek M. Zurada

4. NEURAL AND ADAPTIVE SYSTEMS - José C. Principe, Neil R. Euliano & W. Curt Lefebvre

5. ARTIFICIAL INTELLIGENCE - Nils J. Nilsson

6. SIMULATING NEURAL NETWORKS - James A. Freeman
WEB SITES

http://hem.hj.se/~de96klda/NeuralNetworks.htm

http://www.cs.bgu.ac.il/~omri/Perceptron/

http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/neural/faq.html

http://vv.carleton.ca/~neil/neural/neuron.html
http://www.uib.no/People/hhioh/nn/

ftp://ftp.sas.com/pub/neural/FAQ.html

news:comp.ai.neural-nets
