
Thesis

On
Detection of Sinkhole attack
In
Wireless Sensor Network
Submitted in partial fulfillment of the requirements
For the award of the degree of
Master of Technology
In
Computer Science & Engineering
Submitted by
Monika Kamra
Under the supervision of
Mrs. Seema Kharb
(Assistant Professor)

Bhagwan Mahavir Institute Of Engineering and Technology


Fazilpur, Sonipat

Affiliated to DCRUST University, Sonipat

ABSTRACT

In a wireless sensor network, multiple nodes send sensor readings to a base station for further processing. It is well known that such many-to-one communication is highly vulnerable to the sinkhole attack, where an intruder attracts surrounding nodes with unfaithful routing information and then performs selective forwarding or alters the data passing through it. A sinkhole attack forms a serious threat to sensor networks, particularly because such networks are often deployed in open areas and have weak computation and battery power. In this work, we present a novel algorithm for detecting the intruder in a sinkhole attack. The algorithm first finds a list of suspected nodes, and then effectively identifies the intruder in the list through a network flow graph. The algorithm is also robust in dealing with cooperative malicious nodes that attempt to hide the real intruder. We have evaluated the performance of the proposed algorithm through both numerical analysis and simulations, which confirmed its effectiveness and accuracy. Our results also suggest that its communication and computation overheads are reasonably low for wireless sensor networks.

TABLE OF CONTENTS
CHAPTER 1: INTRODUCTION
CHAPTER 2: LITERATURE REVIEW
CHAPTER 3: RESEARCH METHODOLOGY
3.1 Research Design
3.2 Research Objective
CHAPTER 6: RESULT AND DISCUSSION

CHAPTER 7: FUTURE WORK AND CONCLUSION


REFERENCES

CHAPTER 1

INTRODUCTION

Wireless sensor networks have become increasingly popular for solving such challenging real-world problems as industrial sensing and environmental monitoring. A sensor network generally consists of a set of sensor nodes, which continuously monitor their surroundings and forward the sensing data to a sink node, or base station. It is well known that such many-to-one communication is highly vulnerable to the sinkhole attack, where an intruder attracts surrounding nodes with unfaithful routing information, and then alters the data passing through it or performs selective forwarding. A sinkhole attack prevents the base station from obtaining complete and correct sensing data, and thus forms a serious threat to higher-layer applications. It is particularly severe for wireless sensor networks, given the vulnerability of wireless links and the fact that the sensors are often deployed in open areas with weak computation and battery power. Although some secure or geographic routing protocols resist sinkhole attacks to a certain degree, many current routing protocols in sensor networks remain susceptible to the sinkhole attack.
The performance of the proposed algorithm is evaluated through both numerical analysis and
simulations, which confirmed the effectiveness and accuracy of the algorithm. Our results also
suggest that its communication and computation overheads are reasonably low for wireless
sensor networks.
The power of wireless sensor networks lies in the ability to deploy large numbers of tiny nodes that assemble and configure themselves. Usage scenarios for these devices range from real-time tracking, to monitoring of environmental conditions, to ubiquitous computing environments, to in situ monitoring of the health of structures or equipment. While often referred to as wireless sensor networks, these devices can also control actuators, extending control from the Internet into the physical world.

The most straightforward application of wireless sensor network technology is to monitor remote environments for low-frequency data trends. For example, a chemical plant could be easily monitored for leaks by hundreds of sensors that automatically form a wireless interconnection network and immediately report the detection of any chemical leaks. Unlike traditional wired

systems, deployment costs would be minimal. In addition to drastically reducing installation costs, wireless sensor networks have the ability to adapt dynamically to changing environments. Adaptation mechanisms can respond to changes in network topology or can cause the network to shift between drastically different modes of operation. For instance, the same embedded network performing leak monitoring in a chemical plant could be reconfigured into a network designed to localize the source of a leak and track the diffusion of toxic gases. The network could then direct workers to the safest path for emergency evacuation.

Unlike traditional wireless devices, wireless sensor nodes do not need to communicate directly with the nearest high-power tower or base station, but only with their local peers. Instead of relying on a pre-deployed infrastructure, each individual sensor or actuator becomes part of the infrastructure. Peer-to-peer networking protocols provide a mesh-like interconnect to shuttle data between the thousands of tiny embedded devices in a multi-hop fashion. The flexible mesh architectures envisioned here adapt dynamically to support the introduction of new nodes or to expand to cover a larger geographic area. In addition, the system can automatically adapt to compensate for node failures.

The vision of mesh networking is based on strength in numbers. Unlike cell phone systems that deny service when too many phones are active in a small area, the interconnection of a wireless sensor network only grows stronger as nodes are added. As long as there is ample density, a single network of nodes can grow to cover a limitless area.
The concept of wireless sensor networks is based on a simple equation:
Sensing + CPU + Radio = Thousands of potential applications
As soon as people understand the capabilities of a wireless sensor network, hundreds of applications spring to mind. It seems like a straightforward combination of modern technology. However, actually combining sensors, radios, and CPUs into an effective wireless sensor network requires a detailed understanding of both the capabilities and limitations of each of the underlying hardware components, as well as a
detailed understanding of modern networking technologies and distributed systems theory. Each
individual node must be designed to provide the set of primitives necessary to synthesize the
interconnected web that will emerge as they are deployed, while meeting strict requirements of
size, cost and power consumption. A core challenge is to map the overall system requirements
down to individual device capabilities, requirements and actions. To make the wireless sensor
network vision a reality, architecture must be developed that synthesizes the envisioned
applications out of the underlying hardware capabilities. To develop this system architecture we
work from the high level application requirements down through the low-level hardware
requirements. In this process we first attempt to understand the set of target applications. To limit
the number of applications that we must consider, we focus on a set of application classes that
we believe are representative of a large fraction of the potential usage scenarios. We use this set
of application classes to explore the system-level requirements that are placed on the overall
architecture. From these system-level requirements we can then drill down into the individual
node-level requirements. Additionally, we must provide a detailed background into the
capabilities of modern hardware. After we present the raw hardware capabilities, we present a
basic wireless sensor node.
1.1 Sensor network application classes
The three application classes we have selected are: environmental data collection, security
monitoring, and sensor node tracking. We believe that the majority of wireless sensor network
deployments will fall into one of these class templates.

1.1.1 Environmental Data Collection

A canonical environmental data collection application is one where a research scientist wants to
collect several sensor readings from a set of points in an environment over a period of time in
order to detect trends and interdependencies. This scientist would want to collect data from
hundreds of points spread throughout the area and then analyze the data offline. The scientist
would be interested in collecting data over several months or years in order to look for long-term
and seasonal trends. For the data to be meaningful it would have to be collected at regular
intervals and the nodes would remain at known locations. At the network level, the
environmental data collection application is characterized by having a large number of nodes
continually sensing and transmitting data back to a set of base stations that store the data using
traditional methods. These networks generally require very low data rates and extremely long
lifetimes. In a typical usage scenario, the nodes will be evenly distributed over an outdoor environment. The distance between adjacent nodes will be minimal, yet the distance across the entire network will be significant. After deployment, the nodes must first discover the topology of the network and estimate optimal routing strategies. The routing strategy can then be used to route data to a central collection point. In environmental monitoring applications, it is not
essential that the nodes develop the optimal routing strategies on their own. Instead, it may be
possible to calculate the optimal routing topology outside of the network and then communicate
the necessary information to the nodes as required. This is possible because the physical topology of the network is relatively constant. While the time variant nature of RF
communication may cause connectivity between two nodes to be intermittent, the overall
topology of the network will be relatively stable. Environmental data collection applications
typically use tree-based routing topologies where each routing tree is rooted at high-capability
nodes that sink data. Data is periodically transmitted from child node to parent node up the tree structure until it reaches the sink. With tree-based data collection, each node is responsible for forwarding the data of all of its descendants. Nodes with a large number of descendants transmit significantly more data than leaf nodes; these nodes can quickly become energy bottlenecks. Once the network is configured, each node periodically samples its sensors and transmits its data up the routing tree and back to the base station. For many scenarios, the interval between these transmissions can be on the order of minutes. Typical reporting periods are expected to be between 1 and 15 minutes, although it is possible for networks to have significantly higher reporting
rates. The typical environment parameters being monitored, such as temperature, light intensity,
and humidity, do not change quickly enough to require higher reporting rates. In addition to large
sample intervals, environmental monitoring applications do not have strict latency requirements.
Data samples can be delayed inside the network for moderate periods of time without
significantly affecting application performance. In general, the data is collected for future analysis, not for real-time operation. In order to meet lifetime requirements, each communication event must be precisely scheduled. The sensor nodes will remain dormant a majority of the time;
they will only wake to transmit or receive data. If the precise schedule is not met, the
communication events will fail. As the network ages, it is expected that nodes will fail over time.
Periodically the network will have to reconfigure to handle node/link failure or to redistribute
network load. Additionally, as the researchers learn more about the environment they study, they may want to go in and insert additional sensing points. In both cases, the reconfigurations are
relatively infrequent and will not represent a significant amount of the overall system energy
usage. The most important characteristics of the environmental monitoring requirements are long
lifetime, precise synchronization, low data rates, and relatively static topologies. Additionally, it is not essential that the data be transmitted in real time back to the central collection point. The
data transmissions can be delayed inside the network as necessary in order to improve network
efficiency.
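As a rough illustration of the forwarding load described above, the short Python sketch below computes how many readings each node in a collection tree must transmit per reporting interval, assuming one reading per node per interval. The topology and node names are hypothetical and chosen only for illustration.

# Sketch: per-node forwarding load in a tree-based collection network.
# Assumes each node generates one reading per reporting interval and
# forwards every reading produced in its subtree toward the sink.
parents = {            # hypothetical topology: child -> parent
    "A": "sink", "B": "sink",
    "C": "A", "D": "A", "E": "B", "F": "E",
}

def forwarding_load(parents):
    load = {node: 1 for node in parents}   # each node sends its own reading
    load["sink"] = 0
    for child in parents:                  # add every node's reading to all of its ancestors
        hop = parents[child]
        while hop != "sink":
            load[hop] += 1
            hop = parents[hop]
    return load

print(forwarding_load(parents))
# Node A relays the readings of C and D in addition to its own, so its radio
# is active roughly three times as often as a leaf node such as F.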

1.1.2 Security Monitoring

Our second class of sensor network application is security monitoring. Security monitoring
networks are composed of nodes that are placed at fixed locations throughout an environment
that continually monitor one or more sensors to detect an anomaly. A key difference between
security monitoring and environmental monitoring is that security networks are not actually
collecting any data. This has a significant impact on the optimal network architecture. Each node
has to frequently check the status of its sensors but it only has to transmit a data report when
there is a security violation. The immediate and reliable communication of alarm messages is the
primary system requirement. These are "report by exception" networks. Additionally, it is essential to confirm that each node is still present and functioning; if a node were to be disabled or fail, it would represent a security violation that should be reported. For security monitoring applications, the network must be configured so that nodes are responsible for confirming the status of each other. One approach is to have each node assigned a peer that will report if a node is not functioning. The optimal topology of a security monitoring network will look quite different from that of a data collection network. In a collection tree, each node must transmit the data of all of its descendants. Because of this, it is optimal to have a short, wide tree. In contrast, with a security network the optimal configuration would be a linear topology that forms a Hamiltonian cycle of the network. The power consumption of each node is only proportional to the number of children it has. In a linear network, each node would have only one child. This would evenly distribute the energy consumption of the network. The
accepted norm for security systems today is that each sensor should be checked approximately
once per hour. Combined with the ability to evenly distribute the load of checking nodes, the
energy cost of performing this check becomes minimal. A majority of the energy consumption in
a security network is spent on meeting the strict latency requirements associated with signaling the alarm when a security violation occurs. Once detected, a security violation must be
communicated to the base station immediately. The latency of the data communication across the
network to the base station has a critical impact on application performance. Users demand that
alarm situations be reported within seconds of detection. This means that network nodes must be
able to respond quickly to requests from their neighbors to forward data. In security networks
reducing the latency of an alarm transmission is significantly more important than reducing the
energy cost of the transmissions. This is because alarm events are expected to be rare. In a fire
security system alarms would almost never be signaled. In the event that one does occur a
significant amount of energy could be dedicated to the transmission. Reducing the transmission
latency leads to higher energy consumption because routing nodes must monitor the radio
channel more frequently. In security networks, a vast majority of the energy will be spent on
confirming the functionality of neighboring nodes and in being prepared to instantly forward
alarm announcements. Actual data transmission will consume a small fraction of the network
energy.

1.1.3 Node tracking scenarios

A third usage scenario commonly discussed for sensor networks is the tracking of a tagged
object through a region of space monitored by a sensor network. There are many situations
where one would like to track the location of valuable assets or personnel. Current inventory
control systems attempt to track objects by recording the last checkpoint that an object passed
through. However, with these systems it is not possible to determine the current location of an
object. For example, UPS tracks every shipment by scanning it with a barcode whenever it
passes through a routing center. The system breaks down when objects do not flow from
checkpoint to checkpoint. In typical work environments it is impractical to expect objects to be
continually passed through checkpoints. With wireless sensor networks, objects can be tracked
by simply tagging them with a small sensor node. The sensor node will be tracked as it moves
through a field of sensor nodes that are deployed in the environment at known locations. Instead
of sensing environmental data, these nodes will be deployed to sense the RF messages of the
nodes attached to various objects. The nodes can be used as active tags that announce the
presence of a device. A database can be used to record the location of tracked objects relative to
the set of nodes at known locations. With this system, it becomes possible to ask where an object
is currently, not simply where it was last scanned. Unlike sensing or security networks, node
tracking applications will continually have topology changes as nodes move through the
network. While the connectivity between the nodes at fixed locations will remain relatively stable, the connectivity to mobile nodes will be continually changing. Additionally, the set of
nodes being tracked will continually change as objects enter and leave the system. It is essential
that the network be able to efficiently detect the presence of new nodes that enter the network.

1.1.4 Hybrid networks

In general, complete application scenarios contain aspects of all three categories. For example,
in a network designed to track vehicles that pass through it, the network may switch between
being an alarm monitoring network and a data collection network. During the long periods of
inactivity when no vehicles are present, the network will simply perform an alarm monitoring
function. Each node will monitor its sensors waiting to detect a vehicle. Once an alarm event is
detected, all or part of the network will switch into a data collection network and periodically report sensor readings up to a base station that tracks the vehicle's progress. Because of this multi-modal network behavior, it is important to develop a single architecture that can handle all three of these application scenarios.

1.2 System Evaluation Metrics


Now that we have established the set of application scenarios that we are addressing, we explore
the evaluation metrics that will be used to evaluate a wireless sensor network. To do this we keep
in mind the high-level objectives of the network deployment, the intended usage of the network,
and the key advantages of wireless sensor networks over existing technologies. The key
evaluation metrics for wireless sensor networks are lifetime, coverage, cost and ease of deployment, response time, temporal accuracy, security, and effective sample rate. Their importance is discussed below. One result is that many of these evaluation metrics are interrelated. Often it may be necessary to decrease performance in one metric, such as sample rate, in order to increase another, such as lifetime. Taken together, this set of metrics forms a
multidimensional space that can be used to describe the capabilities of a wireless sensor network.
The capabilities of a platform are represented by a volume in this multidimensional space that
contains all of the valid operating points. In turn, a specific application deployment is
represented by a single point. A system platform can successfully perform the application if and
only if the application requirements point lies inside the capability hyperspace. One goal of this
chapter is to present an understanding of the tradeoffs that link each axis of this space and an
understanding of current capabilities. The architectural improvements and optimizations we
present in later chapters are then motivated by increasing the ability to deliver these capabilities
and increasing the volume of the capability hypercube.

1.2.1 Lifetime

Critical to any wireless sensor network deployment is the expected lifetime. The goal of both the
environmental monitoring and security application scenarios is to have nodes placed out in the
field, unattended, for months or years. The primary limiting factor for the lifetime of a sensor
network is the energy supply. Each node must be designed to manage its local supply of energy
in order to maximize total network lifetime. In many deployments it is not the average node
lifetime that is important, but rather the minimum node lifetime. In the case of wireless security
systems, every node must last for multiple years. A single node failure would create a vulnerability in the security system. In some situations it may be possible to exploit external power, perhaps
by tapping into building power with some or all nodes. However, one of the major benefits to
wireless systems is the ease of installation. Requiring power to be supplied externally to all
nodes largely negates this advantage. A compromise is to have a handful of special nodes that are
wired into the building's power infrastructure. In most application scenarios, a majority of the
nodes will have to be self-powered. They will either have to contain enough stored energy to last
for years, or they will have to be able to scavenge energy from the environment through devices,
such as solar cells or piezoelectric generators. Both of these options demand that the average
energy consumption of the nodes be as low as possible. The most significant factor in
determining lifetime of a given energy supply is radio power consumption. In a wireless sensor
node, the radio consumes a vast majority of the system energy. This power consumption can be reduced through decreasing the transmission output power or through decreasing the radio duty
cycle. Both of these alternatives involve sacrificing other system metrics.

1.2.2 Coverage

Next to lifetime, coverage is the primary evaluation metric for a wireless network. It is always
advantageous to have the ability to deploy a network over a larger physical area. This can
significantly increase a system's value to the end user. It is important to keep in mind that the coverage of the network is not equal to the range of the wireless communication links being
used. Multi-hop communication techniques can extend the coverage of the network well beyond
the range of the radio technology alone. In theory they have the ability to extend network range
indefinitely. However, for a given transmission range, multi-hop networking protocols increase
the power consumption of the nodes, which may decrease the network lifetime. Additionally,
they require a minimal node density, which may increase the deployment cost. Tied to range is a network's ability to scale to a large number of nodes. Scalability is a key component of the
wireless sensor network value proposition. A user can deploy a small trial network at first and
then can continually add sense points to collect more and different information. A user must be
confident that the network technology being used is capable of scaling to meet his eventual need.
Increasing the number of nodes in the system will impact either the lifetime or effective sample
rate. More sensing points will cause more data to be transmitted which will increase the power
consumption of the network. This can be offset by sampling less often.

1.2.3 Cost and ease of deployment

A key advantage of wireless sensor networks is their ease of deployment. Biologists and
construction workers installing networks cannot be expected to understand the underlying
networking and communication mechanisms at work inside the wireless network. For system
deployments to be successful, the wireless sensor network must configure itself. It must be
possible for nodes to be placed throughout the environment by an untrained person and have the
system simply work. Ideally, the system would automatically configure itself for any possible
physical node placement. However, real systems must place constraints on actual node placements; it is not possible to have nodes with infinite range. The wireless sensor network must be capable of providing feedback as to when these constraints are violated. The network should be able to assess the quality of the network deployment and indicate any potential problems.
This translates to requiring that each device be capable of performing link discovery and
determining link quality. In addition to an initial configuration phase, the system must also adapt
to changing environmental conditions. Throughout the lifetime of a deployment, nodes may be
relocated or large physical objects may be placed so that they interfere with the communication
between two nodes. The network should be able to automatically reconfigure on demand in order
to tolerate these occurrences. The initial deployment and configuration is only the first step in the
network lifecycle. In the long term, the total cost of ownership for a system may have more to do
with the maintenance cost than the initial deployment cost. The security application scenario in particular requires that the system be extremely robust. In addition to extensive hardware and
software testing prior to deployment, the sensor system must be constructed so that it is capable
of performing continual self-maintenance. When necessary, it should also be able to generate
requests when external maintenance is required. In a real deployment, a fraction of the total
energy budget must be dedicated to system maintenance and verification. The generation of
diagnostic and reconfiguration traffic reduces the network lifetime. It can also decrease the
effective sample rate.

1.2.4 Response Time

Particularly in our alarm application scenario, system response time is a critical performance
metric. An alarm must be signaled immediately when an intrusion is detected. Despite low power
operation, nodes must be capable of having immediate, high-priority messages communicated
across the network as quickly as possible. While these events will be infrequent, they may occur
at any time without notice. Response time is also critical when environmental monitoring is used
to control factory machines and equipment. Many users envision wireless sensor networks as
useful tools for industrial process control. These systems would only be practical if response time
guarantees could be met. The ability to have low response time conflicts with many of the
techniques used to increase network lifetime. Network lifetime can be increased by having nodes
only operate their radios for brief periods of time. If a node only turns on its radio once per
minute to transmit and receive data, it would be impossible to meet the application requirements
for response time of a security system. Response time can be improved by including nodes that
are powered all the time. These nodes can listen for the alarm messages and forward them down
a routing backbone when necessary. This, however, reduces the ease of deployment for the
system.

1.2.5 Temporal Accuracy

In environmental and tracking applications, samples from multiple nodes must be cross-correlated in time in order to determine the nature of the phenomenon being measured. The necessary accuracy of this correlation mechanism will depend on the rate of propagation of the phenomenon being measured. In the case of determining the average temperature of a
building, samples must only be correlated to within seconds. However, to determine how a
building reacts to a seismic event, millisecond accuracy is required. To achieve temporal
accuracy, a network must be capable of constructing and maintaining a global time base that can
be used to chronologically order samples and events. In a distributed system, energy must be
expended to maintain this distributed clock. Time synchronization information must be
continually communicated between nodes. The frequency of the synchronization messages is
dependent on the desired accuracy of the time clock. The bottom line is that maintenance of a distributed time base requires both power and bandwidth.

1.2.6 Security

Despite the seemingly harmless nature of simple temperature and light information from an
environmental monitoring application, keeping this information secure can be extremely
important. Significant patterns of building use and activity can be easily extracted from a trace of
temperature and light activity in an office building. In the wrong hands, this information can be
exploited to plan a strategic or physical attack on a company. Wireless sensor networks must be
capable of keeping the information they are collecting private from eavesdropping. As we
consider security oriented applications, data security becomes even more significant. Not only
must the system maintain privacy, it must also be able to authenticate data communication. It
should not be possible to introduce a false alarm message or to replay an old alarm message as a
current one. A combination of privacy and authentication is required to address the needs of
all three scenarios. Additionally, it should not be possible to prevent proper operation by
interfering with transmitted signals. Use of encryption and cryptographic authentication costs
both power and network bandwidth. Extra computation must be performed to encrypt and
decrypt data and extra authentication bits must be transmitted with each packet. This impacts
application performance by decreasing the number of samples that can be extracted from a given network and the expected network lifetime.

1.2.7 Effective Sample Rate

In a data collection network, effective sample rate is a primary application performance metric. We define the effective sample rate as the rate at which sensor data can be sampled at each individual sensor and communicated to a collection point in a data collection network. Fortunately, environmental data collection applications typically only demand sampling rates of 1-2 samples per minute.
However, in addition to the sample rate of a single sensor, we must also consider the impact of
the multi-hop networking architectures on a node's ability to effectively relay the data of surrounding nodes. In a data collection tree, a node must handle the data of all of its descendants.
If each child transmits a single sensor reading and a node has a total of 60 descendants, then it
will be forced to transmit 60 times as much data. Additionally, it must be capable of receiving
those 60 readings in a single sample period. This multiplicative increase in data communication
has a significant effect on system requirements. Network bit rates combined with maximum
network size end up impacting the effective per-node sample rate of the complete system. One
mechanism for increasing the effective sample rate beyond the raw communication capabilities
of the network is to exploit in-network processing. Various forms of spatial and temporal
compression can be used to reduce the communication bandwidth required while maintaining the
same effective sampling rate. Additionally, local storage can be used to collect and store data at a
high sample rate for short periods of time. In-network data processing can be used to determine
when an interesting event has occurred and automatically trigger data storage. The data can
then be downloaded over the multi-hop network as bandwidth allows. Triggering is the simplest
form of in-network processing. It is commonly used in security systems. Effectively, each
individual sensor is sampled continuously, processed, and only when a security breach has
occurred is data transmitted to the base station. If there were no local computation, a continuous
stream of redundant sensor readings would have to be transmitted. We show how this same
process can be extended to complex detection events.
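As a rough, hypothetical illustration of the multiplicative effect described above, the sketch below estimates the sustained radio throughput a node with 60 descendants would need; the packet size and reporting period are assumed values, not figures from this work.

# Sketch: rough per-node radio load for a node that forwards data for its
# descendants. The packet size, descendant count, and reporting period are
# hypothetical illustration values, not measurements.
descendants = 60          # readings forwarded from the subtree below this node
payload_bits = 30 * 8     # one sensor reading packet, assumed ~30 bytes
period_s = 60             # one reading per node per minute

own_and_forwarded = (descendants + 1) * payload_bits
required_bps = own_and_forwarded / period_s
print(f"Sustained radio throughput needed: {required_bps:.0f} bit/s")
# ~244 bit/s here -- small in absolute terms, but 61x the traffic of a leaf
# node, which is what limits the effective per-node sample rate.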

1.3 Individual node evaluation metrics


Now that we have established the set of metrics that will be used to evaluate the performance of
the sensor network as a whole, we can attempt to link the system performance metrics down to
the individual node characteristics that support them. The end goal is to understand how changes
to the low-level system architecture impact application performance. Just as application metrics
are often interrelated, we will see that an improvement in one node-level evaluation metric (e.g.,
range) often comes at the expense of another (e.g., power).

1.3.1 Power

To meet the multi-year application requirements, individual sensor nodes must be incredibly low-power. Unlike cell phones, with average power consumption measured in hundreds of milliamps and multi-day lifetimes, the average power consumption of wireless sensor network nodes must
be measured in micro amps. This ultra-low-power operation can only be achieved by combining
both low-power hardware components and low duty-cycle operation techniques. During active
operation, radio communication will constitute a significant fraction of the node's total energy
budget. Algorithms and protocols must be developed to reduce radio activity whenever possible.
This can be achieved by using localized computation to reduce the streams of data being
generated by sensors and through application specific protocols. For example, events from
multiple sensor nodes can be combined together by a local group of nodes before transmitting a
single result across the sensor network. Our discussion on available energy sources will show
that a node must consume less than 200 µA on average to last for one year on a pair of AA batteries. In contrast, the average power consumption of a cell phone is typically more than 4,000 µA, a 20-fold difference.
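The 200 µA budget can be sanity-checked with a simple calculation. The sketch below assumes a typical alkaline AA capacity of about 2,500 mAh; actual usable capacity depends on temperature and discharge profile.

# Sketch: average current budget for a one-year lifetime on AA batteries.
# The 2,500 mAh capacity is an assumed typical alkaline AA rating; real
# usable capacity varies with temperature and discharge profile.
capacity_mAh = 2500            # one AA cell; a series pair gives ~3 V at the same mAh
hours_per_year = 365 * 24      # 8,760 h
budget_uA = capacity_mAh / hours_per_year * 1000
print(f"Average current budget: {budget_uA:.0f} uA")   # ~285 uA in the ideal case
# Allowing for regulator losses and battery self-discharge, a practical
# target of roughly 200 uA average matches the figure quoted above.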

1.3.2 Flexibility

The wide range of usage scenarios being considered means that the node architecture must be
flexible and adaptive. Each application scenario will demand a slightly different mix of lifetime,
sample rate, response time and in-network processing. A wireless sensor network architecture
must be flexible enough to accommodate a wide range of application behaviors. Additionally, for cost reasons each device will have only the hardware and software it actually needs for a given application. The architecture must make it easy to assemble just the right set of
software and hardware components. Thus, these devices require an unusual degree of hardware
and software modularity while simultaneously maintaining efficiency.

1.3.3 Robustness

In order to support the lifetime requirements demanded, each node must be constructed to be as
robust as possible. In a typical deployment, hundreds of nodes will have to work in harmony for
years. To achieve this, the system must be constructed so that it can tolerate and adapt to
individual node failure. Additionally, each node must be designed to be as robust as possible.
System modularity is a powerful tool that can be used to develop a robust system. By dividing
system functionality into isolated sub-pieces, each function can be fully tested in isolation prior
to combining them into a complete application. To facilitate this, system components should be
as independent as possible and have interfaces that are narrow, in order to prevent unexpected
interactions. In addition to increasing the system's robustness to node failure, a wireless sensor
network must also be robust to external interference. As these networks will often coexist with
other wireless systems, they need the ability to adapt their behavior accordingly. The robustness
of wireless links to external interference can be greatly increased through the use of multi-channel and spread-spectrum radios. It is common for facilities to have existing wireless devices that operate on one or more frequencies. The ability to avoid congested frequencies is
essential in order to guarantee a successful deployment.

1.3.4 Security

In order to meet the application-level security requirements, the individual nodes must be capable of performing complex encryption and authentication algorithms. Wireless data communication
is easily susceptible to interception. The only way to keep data carried by these networks private
and authentic is to encrypt all data transmissions. The CPU must be capable of performing the
required cryptographic operations itself or with the help of included cryptographic accelerators.
In addition to securing all data transmission, the nodes themselves must secure the data that they
contain. While they will not have large amounts of application data stored internally, they will
have to store secret encryption keys used in the network. If these keys are revealed, the security
of the network could crumble. To provide true security, it must be difficult to extract the
encryption keys from any node.

1.3.5 Communication

A key evaluation metric for any wireless sensor network is its communication rate, power
consumption, and range. While we have made the argument that the coverage of the network is
not limited by the transmission range of the individual nodes, the transmission range does have a
significant impact on the minimal acceptable node density. If nodes are placed too far apart it
may not be possible to create an interconnected network or one with enough redundancy to
maintain a high level of reliability. Most application scenarios have natural node densities that
correspond to the granularity of sensing that is desired. If the radio communication range demands a higher node density, additional nodes must be added to the system in order to increase node density to a tolerable level. The communication rate also has a significant impact on node
performance. Higher communication rates translate into the ability to achieve higher effective
sampling rates and lower network power consumption. As bit rates increase, transmissions take
less time and therefore potentially require less energy. However, an increase in radio bit rate is
often accompanied by an increase in radio power consumption. All things being equal, a higher
transmission bit rate will result in higher system performance. However, we show later that an
increase in the communication bit rate has a significant impact on the power consumption and
computational requirement of the node. In total, the benefits of an increase in bit rate can be
offset by several other factors.

1.3.6 Computation

The two most computationally intensive operations for a wireless sensor node are the in-network
data processing and the management of the low-level wireless communication protocols. As we
discuss later, there are strict real-time requirements associated with both communication and
sensing. As data is arriving over the network, the CPU must simultaneously control the radio and
record/decode the incoming data. Higher communication rates require faster computation. The
same is true for processing being performed on sensor data. Analog sensors can generate
thousands of samples per second. Common sensor processing operations include digital filtering,
averaging, threshold detection, correlation and spectral analysis. It may even be necessary to
perform a real-time FFT on incoming data in order to detect a high-level event. In addition to
being able to locally process, refine and discard sensor readings, it can be beneficial to combine
data with neighboring sensors before transmission across a network. Just as complex sensor
waveforms can be reduced to key events, the results from multiple nodes can be synthesized
together. This in-network processing requires additional computational resources. In our
experience, 2-4 MIPS of processing are required to implement the radio communication
protocols used in wireless sensor networks. Beyond that, the application data processing can
consume an arbitrary amount of computation depending on the calculations being performed.
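As a small, hypothetical example of the kind of local processing mentioned above, the sketch below applies a moving-average filter and a threshold test so that a node transmits only when an event of interest occurs; the window size and threshold are illustrative values only.

# Sketch: simple in-network processing -- a moving-average filter followed by
# threshold detection, so that only "interesting" events are reported instead
# of a continuous stream of raw samples.
from collections import deque

WINDOW = 8          # samples in the moving average (illustrative)
THRESHOLD = 40.0    # hypothetical alarm level (e.g. degrees C)

window = deque(maxlen=WINDOW)

def process_sample(raw_value):
    """Return an event to transmit, or None to stay silent."""
    window.append(raw_value)
    smoothed = sum(window) / len(window)     # cheap low-pass filter
    if smoothed > THRESHOLD:
        return {"event": "threshold_exceeded", "value": smoothed}
    return None                              # suppress transmission, save energy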

1.3.7 Time Synchronization

In order to support time correlated sensor readings and low-duty cycle operation of our data
collection application scenario, nodes must be able to maintain precise time synchronization with
other members of the network. Nodes need to sleep and wake together so that they can
periodically communicate. Errors in the timing mechanism will create inefficiencies that result in
increased duty cycles. In distributed systems, clocks drift apart over time due to inaccuracies in
timekeeping mechanisms. Depending on temperature, voltage, and humidity, timekeeping oscillators
operate at slightly different frequencies. High-precision synchronization mechanisms must be
provided to continually compensate for these inaccuracies.
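A rough calculation shows how the desired accuracy drives the synchronization message rate. The sketch below assumes a 50 ppm worst-case relative drift, which is a typical figure for low-cost crystals rather than a value measured in this work.

# Sketch: how often nodes must resynchronize to hold a target accuracy,
# given an assumed crystal drift rate.
drift_ppm = 50                  # assumed worst-case relative drift between two nodes
target_error_ms = 1.0           # desired time-base accuracy

drift_s_per_s = drift_ppm * 1e-6
resync_interval_s = (target_error_ms / 1000) / drift_s_per_s
print(f"Resynchronize at least every {resync_interval_s:.0f} s")   # 20 s here
# Tighter accuracy or cheaper oscillators mean more frequent sync messages,
# which is exactly the power and bandwidth cost described above.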

1.3.8 Size & Cost

The physical size and cost of each individual sensor node has a significant and direct impact on
the ease and cost of deployment. Total cost of ownership and initial deployment cost are two key
factors that will drive the adoption of wireless sensor network technologies. In data collection
networks, researchers will often be operating off of a fixed budget. Their primary goal will be to
collect data from as many locations as possible without exceeding their fixed budget. A reduction
in per-node cost will result in the ability to purchase more nodes, deploy a collection network
with higher density, and collect more data. Physical size also impacts the ease of network
deployment. Smaller nodes can be placed in more locations and used in more scenarios. In the
node tracking scenario, smaller, lower cost nodes will result in the ability to track more objects.

1.4 Hardware Capabilities

Now that we have identified the key characteristics of a wireless sensor node, we can look at the capabilities of modern hardware. This allows us to understand
what bit rate, power consumption, memory and cost we can expect to achieve. A balance must be
maintained between capability, power consumption and size in order to best address application
needs. This section gives a quick overview of modern technology and the tradeoffs between
different technologies. We start with a background of energy storage technologies and continue
through the radio, CPU, and sensors.

1.5 Sinkhole attack


In a sinkhole attack, the attacking node pretends to be located on the shortest path to an important node or destination node such as the Base Station. This attack can have a large negative impact on the network even if there is only one attacking node. The effects are especially severe in the case of dynamic routing protocols, which are designed to achieve automatic path discovery and maintenance between sensors according to the circumstances of the network: these protocols collect network information periodically to decide routing paths, so in the presence of a sinkhole the whole network can be compromised. Fig. 1 depicts the network state under a sinkhole attack. This state can easily be extended to attacks of various forms, including the wormhole.

Fig. 1: Example of a sinkhole attack

1.5.1 Sinkhole attack detection


The existing sinkhole attack detection technique assumes hop-count based routing. It also assumes that all sensor nodes transmit data to the Base Station periodically. Selective Forwarding is one of the attacks whose effect is greatly amplified when it is combined with a sinkhole attack: the malicious node deliberately does not deliver some of the packets that pass through it. In the case of this attack, the Base Station can make a list of nodes that have not transmitted data during some predefined period. The Base Station then gathers next-hop information from all other nodes located in the attacked area and reconstructs the network topology. For example, it can judge that the node located at the top level of the reconstructed network tree is the sinkhole attack node. However, with this detection method the Base Station cannot detect the sinkhole attack directly; it can only detect the additional attack that the malicious node carries out (Selective Forwarding in the above situation). In other words, the Base Station cannot judge the presence of a sinkhole attack if it does not detect an attack launched together with the sinkhole attack. Also, the method cannot be applied to LQI-based mesh routing protocols. In addition, sensor nodes remain exposed to the attack until the sinkhole attack is detected. Therefore, in this paper, we propose a sinkhole attack detection method that differs from the existing method.
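The detection idea sketched above can be illustrated with a small, hypothetical example: the Base Station rebuilds the routing tree of the silent region from the gathered next-hop reports and flags the node at the top of that region as the sinkhole suspect. The node identifiers and reports below are invented for illustration.

# Sketch of the detection idea described above: the Base Station reconstructs
# the topology of the attacked (silent) region from next-hop reports and flags
# the node at the top of that region, whose own next hop lies outside it.
next_hop_reports = {      # node -> the node it forwards packets to
    "n3": "n7", "n4": "n7", "n5": "n4", "n6": "n3", "n7": "n9", "n8": "n7",
}
silent_nodes = {"n3", "n4", "n5", "n6", "n7", "n8"}   # stopped reaching the BS

def suspect_sinkhole(reports, silent):
    # Candidate = a node inside the silent region whose next hop is outside
    # it (or missing); it sits at the top level of the affected subtree.
    return [n for n in silent if reports.get(n) not in silent]

print(suspect_sinkhole(next_hop_reports, silent_nodes))   # ['n7']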

1.5.2 Trust Based Sinkhole Detection


A. Dynamic Source Routing (DSR) Protocol
The DSR protocol is a reactive routing protocol. As the name suggests, it uses IP source routing. All data packets are affixed with
a DSR Source Route header that contains the complete list of nodes that a packet has to
traverse in order to reach a particular destination. Each intermediate node, upon receiving
a data packet, forwards the packet to the next hop as listed in the Source Route header.
During route discovery, the source node broadcasts a ROUTE REQUEST packet with a
unique identification number. The ROUTE REQUEST packet contains the address of the
target node to which a route is desired. All nodes that have no information regarding the
target node or have not seen the same ROUTE REQUEST packet append their IP
addresses to the ROUTE REQUEST packet and re-broadcast it. In order to control the
spread of the ROUTE REQUEST packets, the broadcast is done in a non-propagating
manner with the IP TTL field being incremented in each route discovery. The ROUTE
REQUEST packets keep on spreading until the time they reach the target node or any
other node that has a route to the target node. The recipient node creates a ROUTE
REPLY packet, which contains the complete list of nodes that the ROUTE REQUEST
packet had traversed. Based upon implementation, the target node may respond to one or
more incoming ROUTE REQUEST packets. Similarly, the source node may accept one
or more ROUTE REPLY packets for a single target node. In this paper, we have used multi-path DSR, in which each ROUTE REQUEST packet received by the destination is responded to by an independent ROUTE REPLY packet. For optimization reasons, nodes maintain a PATH CACHE or a LINK CACHE scheme. All nodes, either forwarding or overhearing data and control packets, add all useful information to their respective route
cache. This information is used to limit spread of control packets for subsequent route
discoveries. For example, if an intermediate node receives a packet for which its next hop
is not available, it may drop the packet and inform the sender. However, if it has a route
to the final recipient, it can Salvage that route from its own cache, send the packet on to
the new route and inform the sender about the failed link through a ROUTE ERROR
packet.
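A minimal sketch of source-route forwarding may help make the mechanism concrete; the packet representation below is simplified and hypothetical, since a real DSR implementation carries the route in an IP options header with several additional fields.

# Sketch: forwarding a packet along a DSR Source Route header. The packet
# representation is a simplified, hypothetical stand-in for the real header.
def forward(packet, my_address):
    route = packet["source_route"]          # full node list chosen by the source
    idx = route.index(my_address)           # assumes this node appears in the route
    if idx == len(route) - 1:
        deliver_locally(packet)             # we are the final destination
        return
    next_hop = route[idx + 1]               # next node listed by the source
    transmit(packet, next_hop)

def deliver_locally(packet):
    print("delivered:", packet["payload"])

def transmit(packet, next_hop):
    print("forwarding to", next_hop)

# Example: a packet from S to D via A and B.
pkt = {"source_route": ["S", "A", "B", "D"], "payload": "reading"}
forward(pkt, "A")     # node A sends the packet on to B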
B. Attack Pattern
In the sinkhole attack, in order to attract network traffic, a malicious node fabricates or generates fallacious routing packets, which portray a shorter route to a
particular destination. The naive nodes, upon receipt of these packets, re-route their
current or subsequent traffic through these sinkholes. The malicious node then uses its
discretion to selectively dump or modify the data packets that pass through it. The
creation of a sinkhole only requires a single malicious or compromised node. In contrast,
the creation of a wormhole entails the help of two or more colluding nodes. In any sensor
network, a wormhole can be produced through the following three ways:
1) Tunneling of packets above the network layer
2) Long range tunnel using high power transmitters
3) Tunnel creation via wired infrastructure
In the first type of wormhole, all packets which are received by a malicious node are duly modified, encapsulated in a higher-layer protocol
and dispatched to the colluding node, using the services of the other network nodes. These
encapsulated packets traverse the network in the regular manner until they reach the collaborating node. The recipient malicious node extracts the original packet, modifies it accordingly, and sends it to the intended destination. In the second and third types of wormholes, the packets are modified and encapsulated in a similar manner; however, instead of being dispatched through the network nodes, they are sent using a point-to-point
specialized link between the colluding nodes.
C. Trust Model
To detect and evade sinkholes and wormholes in the network, we make use of
an effort-return based trust model. The trust model uses the inherent features of the Dynamic
Source Routing (DSR) protocol to derive and compute respective trust levels in other nodes.
For correct execution of the model, the following conditions must be met by all participating
nodes in the sensor network:
All nodes must support promiscuous mode operation.
Node transceivers are omnidirectional and can receive and transmit in all directions.
The transmission and reception ranges of all transceivers in the network are comparable.
Each node executing the trust model measures the accuracy and sincerity of its immediate
neighboring nodes by monitoring their participation in the packet forwarding mechanism.
The sending node verifies the different fields in the forwarded IP packet for requisite
modifications through a sequence of integrity checks. If the integrity checks succeed, it
confirms that the node has acted in a benevolent manner and so its direct trust counter is
incremented. Similarly, if the integrity checks fail or the forwarding node does not transmit the packet at all, its corresponding direct trust measure is decremented. We represent the direct trust in a node y by node x as Txy, which is given by the following equation: Txy = PP · PA, where PP ∈ [0, 1] represents the situational trust category Packet Precision, which essentially indicates the existence or absence of a wormhole through node y. PA represents the situational trust category Packet Acknowledgements, which keeps a count of the number of packets that have been forwarded by a node and helps to identify sinkholes. The categories PP and PA are employed in combination to protect the DSR protocol against wormhole and
sinkhole attacks respectively. We refer to this modified DSR protocol as the DSR-mod
protocol hereafter.
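The direct trust computation can be sketched as follows. The text does not fix the exact scaling of PP and PA, so this is only one possible reading: PP falls to zero once a Source Route modification (a wormhole indication) is observed, and PA is the fraction of packets the neighbour was overheard forwarding.

# Sketch of the direct-trust computation Txy = PP * PA described above, under
# the assumed interpretation stated in the lead-in.
class NeighbourTrust:
    def __init__(self):
        self.header_intact = True   # no wormhole indication seen so far
        self.forwarded = 0          # packets overheard being retransmitted
        self.sent = 0               # packets handed to this neighbour

    def record(self, overheard, header_unmodified):
        self.sent += 1
        if overheard:
            self.forwarded += 1
            if not header_unmodified:
                self.header_intact = False

    def trust(self):
        pp = 1.0 if self.header_intact else 0.0                 # Packet Precision
        pa = self.forwarded / self.sent if self.sent else 0.0   # Packet Acknowledgements
        return pp * pa                                          # Txy = PP * PA

t = NeighbourTrust()
t.record(overheard=True, header_unmodified=True)
t.record(overheard=False, header_unmodified=True)   # possible sinkhole drop
print(t.trust())    # 0.5 -- half the packets were seen being forwarded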
D. Detection Process
Each node, before transmission of a data packet, buffers the DSR Source Route header. After transmitting the packet, the node places its wireless interface into
the promiscuous mode for the Trust Update Interval (TUI). The TUI fundamentally
represents the time a sending node must wait after transmitting a packet until the time it
overhears the retransmission by its neighbour. This interval is critically related to the
mobility and traffic of the network and needs to be set accordingly. If this interval is made too small, it may result in the retransmissions of a slow neighbour being ignored.
Similarly a large TUI value may augment energy costs as well as induce errors due to nodes
getting out of reception range. If during the TUI, the node is able to overhear its immediate
node retransmit the same packet, the sending node increases the situational trust category PA for that neighbour, indicating the absence of a sinkhole. It then verifies whether the retransmitted packet's DSR Source Route header is the same as the one that was buffered earlier. If the Salvage field of the DSR Source Route option is zero, then the list of addresses should be exactly the same. If this integrity check passes, the situational trust
category PP is not set, indicating the absence of a wormhole. However, if the retransmitting
node modifies the DSR Source Route header, the detecting node sets PP to true. In case no
retransmission is heard and a timeout occurs when the TUI expires, the situational trust category PA for that neighbour is decremented and the DSR Source Route buffer is cleared.
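A minimal sketch of this detection step, reusing the hypothetical NeighbourTrust class from the trust-model sketch above, might look as follows; the TUI value and the timing loop are illustrative only.

# Sketch of the detection step described above: after sending a packet, the
# node buffers the Source Route header and waits up to the Trust Update
# Interval (TUI) for the neighbour's retransmission.
import time

TUI_SECONDS = 0.5    # illustrative Trust Update Interval

def watch_neighbour(sent_header, overhear, trust):
    """overhear() should return the retransmitted Source Route header,
    or None if nothing has been heard yet."""
    deadline = time.time() + TUI_SECONDS
    while time.time() < deadline:
        heard = overhear()
        if heard is not None:
            trust.record(overheard=True,
                         header_unmodified=(heard == sent_header))
            return
        time.sleep(0.01)
    # Timeout: no retransmission overheard within the TUI, so PA is penalized.
    trust.record(overheard=False, header_unmodified=True)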
E. Evasion Process
In the standard DSR, before initiating a new route discovery, the cache is first scanned for a working route to the destination. In the event of unavailability of a route
from the cache, the ROUTE REQUEST packet is propagated. When the search is made for a
route in the cache, the Leader based algorithm is executed, which returns the shortest path to any destination in terms of number of hops. In the LINK CACHE scheme the default cost of each link is one. We modify this cost in DSR-mod and instead replace it with the trust level of the node that acts as the link destination. In case the status of the link end node is
classified as a wormhole, the cost of that link is set to infinity. Consequently, each time a new
route is required, a modified variant of the search algorithm is executed, which finds routes
with the maximum trust level, thereby evading any possible sinkholes and wormholes. Nodes
in a sensor network come into contact with other nodes in the network via their immediate
neighborhood. This neighborhood varies with the mobility of the node itself and that of the
other nodes in the network. However, for static sensor networks the immediate neighborhood
doesn't change, and so the behavior of the nodes beyond a single hop cannot be directly
determined. The direct trust values can be shared among neighbors using a higher layer
Reputation Exchange Protocol or as an integral component of the underlying routing
protocol. However, the sharing of trust reputations is vulnerable to deception where a
malicious node may upgrade its own reputation or degrade the reputation of an existing
trustworthy node. Depending on the mobility pattern of the network, there may be
circumstances in which the source node may not have sufficient trust information regarding
all the nodes in the computed path. To deal with such situations, we implement a salvaging
mechanism in DSR-mod where, instead of checking only the connectivity of the next hop, the forwarding nodes also verify the trust levels of all nodes present in the packet's Source Route header. With the standard DSR protocol, all intermediate nodes blindly forward the packets to the succeeding nodes listed in the Source Route header. However, in the DSR-mod protocol, the trust level of all the remaining nodes in the Source Route is first verified for the existence of a sinkhole or a wormhole. Only in the absence of such malicious nodes are the packets forwarded as per the Source Route header. However, in the case where malicious nodes are present in the Source Route header, that particular packet is dropped and a corresponding ROUTE ERROR packet is sent to the originator of the data packet.
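A minimal sketch of this evasion step is shown below. The scoring rule, taking the minimum trust along a candidate route, is one possible reading of "routes with the maximum trust level"; the trust values and routes are hypothetical.

# Sketch of the evasion step described above: cached routes are ranked by the
# trust in their nodes rather than by hop count, and any route containing a
# node classified as a wormhole is discarded.
WORMHOLE = float("-inf")    # trust marker for nodes classified as wormholes

def best_route(candidate_routes, trust_table):
    usable = [r for r in candidate_routes
              if all(trust_table.get(n, 0.0) != WORMHOLE for n in r)]
    if not usable:
        return None                         # fall back to a fresh route discovery
    return max(usable, key=lambda r: min(trust_table.get(n, 0.0) for n in r))

trust = {"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.7, "E": WORMHOLE}
routes = [["A", "B", "D"], ["A", "C", "D"], ["A", "E", "D"]]
print(best_route(routes, trust))    # ['A', 'C', 'D'] -- avoids low-trust B and wormhole E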

CHAPTER 2
LITERATURE REVIEW

Advances in wireless communication and electronics have enabled the development of low-cost,
low power, multifunctional sensor nodes. These tiny sensor nodes, consisting of sensing, data
processing, and communication components, make it possible to deploy Wireless Sensor
Networks (WSNs), which represent a significant improvement over traditional wired sensor
networks. WSNs can greatly simplify system design and operation, as the environment being
monitored does not require the communication or energy infrastructure associated with wired
networks. WSNs are expected to be solutions to many applications, such as detecting and
tracking the passage of troops and tanks on a battlefield, monitoring environmental pollutants,
measuring traffic flows on roads, and tracking the location of personnel in a building. Many
sensor networks have mission-critical tasks and thus require that security be considered.
Improper use of information or using forged information may cause unwanted information leakage and provide inaccurate results. While some aspects of WSNs are similar to traditional
wireless ad hoc networks, important distinctions exist which greatly affect how security is
achieved. The differences between sensor networks and ad hoc networks are:
The number of sensor nodes in a sensor network can be several orders of magnitude higher than
the nodes in an ad hoc network.
Sensor nodes are densely deployed.
Sensor nodes are prone to failures due to harsh environments and energy constraints.
The topology of a sensor network changes very frequently due to failures or mobility.
Sensor nodes are limited in computation, memory, and power resources.
Sensor nodes may not have global identification. These differences greatly affect how secure
data-transfer schemes are implemented in WSNs. For example, the use of radio transmission,
along with the constraints of small size, low cost, and limited energy, make WSNs more
susceptible to denial-of-service attacks. Advanced anti-jamming techniques such as frequency-hopping spread spectrum and physical tamper-proofing of nodes are generally impossible in a
sensor network due to the requirements of greater design complexity and higher energy
consumption. Furthermore, the limited energy and processing power of nodes makes the use of
public key cryptography nearly impossible. While the results from recent studies show that
public key cryptography might be feasible in sensor networks, it remains for the most part
infeasible in WSNs. Instead, most security schemes make use of symmetric key cryptography.
One thing required in either case is the use of keys for secure communication. Managing key
distribution is not unique to WSNs, but again constraints such as small memory capacity make
centralized keying techniques impossible. Straight pairwise key sharing between every two
nodes in a network does not scale to large networks with tens of thousands of nodes, as the

storage requirements are too high. A security scheme in WSNs must provide efficient key
distribution while maintaining the ability for communication between all relevant nodes. In
addition to key distribution, secure routing protocols must be considered. These protocols are
concerned with how a node sends messages to other nodes or a base station. A key challenge is
that of authenticated broadcast. Existing authenticated broadcast methods often rely on public
key cryptography and include high computational overhead making them infeasible in WSNs.
Secure routing protocols proposed for use in WSNs, such as SPINS, must consider these factors.
Additionally, the constraint on energy in WSNs leads to the desire for data aggregation. This
aggregation of sensor data needs to be secure in order to ensure information integrity and
confidentiality. While this is achievable through cryptography, an aggregation scheme must take
into account the constraints in WSNs and the unique characteristics of the cryptography and
routing schemes. It is also desirable for secure data aggregation protocols to be flexible, allowing
lower levels of security for less important data, thus saving energy, and allowing higher levels of
security for more sensitive data, thus consuming more energy. As with any network, awareness
of compromised nodes and attacks is desirable. Many security schemes provide assurance that
data remain intact and communication unaffected as long as fewer than t nodes are compromised.
The ability of a node or base station to detect when other nodes are compromised enables them
to take action, either ignoring the compromised data or reconfiguring the network to eliminate
the threat. The remainder of this article discusses the above areas in more detail and considers
how they are all required to form a complete WSN security scheme. A few existing surveys on
security issues in ad hoc networks can be found; however, only small sections of these surveys
focus on WSNs. A recent survey article on security issues in mobile ad hoc networks also
included an overview of security issues in WSNs. However, the article did not discuss

cryptography and intrusion detection issues. Further, it included only a small portion of the
available literature on security in WSNs. The rest of this article is organized as follows.
Background information on WSNs is presented, followed by a discussion of attacks in the
different network layers of sensor networks. Then we focus on the selection of cryptography in
WSNs, key management, secure routing schemes, secure data aggregation, and intrusion
detection systems.

2.1 Communication Architecture


A WSN is usually composed of hundreds or thousands of sensor nodes. These sensor nodes are
often densely deployed in a sensor field and have the capability to collect data and route data
back to a base station (BS). A sensor consists of four basic parts: a sensing unit, a processing
unit, a transceiver unit, and a power unit. It may also have additional application-dependent
components such as a location finding system, power generator, and mobilizer. Sensing units are
usually composed of two subunits: sensors and analog-to-digital converters (ADCs). The ADCs
convert the analog signals produced by the sensors to digital signals based on the observed
phenomenon. The processing unit, which is generally associated with a small storage unit,
manages the procedures that make the sensor node collaborate with the other nodes. A
transceiver unit connects the node to the network. One of the most important units is the power

unit. A power unit may be finite (e.g., a single battery) or may be supported by power scavenging
devices (e.g., solar cells). Most of the sensor network routing techniques and sensing tasks
require knowledge of location, which is provided by a location finding system. Finally, a
mobilizer may sometimes be needed to move the sensor node, depending on the application. The
protocol stack used in sensor nodes contains physical, data link, network, transport, and
application layers defined as follows:
Physical layer: responsible for frequency selection, carrier frequency generation, signal
detection, modulation, and data encryption
Data link layer: responsible for the multiplexing of data streams, data frame detection, medium
access, and error control; as well as ensuring reliable point-to-point and point-to-multipoint
connections
Network layer: responsible for specifying the assignment of addresses and how packets are
forwarded
Transport layer: responsible for specifying how the reliable transport of packets will take place
Application layer: responsible for specifying how the data are requested and provided for both
individual sensor nodes and interactions with the end user.

2.2 Constraints in WSNs


Individual sensor nodes in a WSN are inherently resource constrained. They have limited
processing capability, storage capacity, and communication bandwidth. Each of these limitations
is due in part to the two greatest constraints: limited energy and physical size. Table 1 shows
several currently available sensor node platforms. The design of security services in WSNs must
consider the hardware constraints of the sensor nodes:
Energy: energy consumption in sensor nodes can be categorized into three parts: energy for
the sensor transducer, energy for communication among sensor nodes, and energy for
microprocessor computation. The study found that each bit transmitted in WSNs consumes about
as much power as executing 800-1000 instructions. Thus, communication is more costly than
computation in WSNs (a rough cost comparison is sketched at the end of this subsection). Any
message expansion caused by security mechanisms comes at a
significant cost. Further, higher security levels in WSNs usually correspond to more energy
consumption for cryptographic functions. Thus, WSNs can be divided into different security
levels, depending on energy cost.
Computation: the embedded processors in sensor nodes are generally not as powerful as those
in nodes of a wired or ad hoc network. As such, complex cryptographic algorithms cannot be
used in WSNs.
Memory: memory in a sensor node usually includes flash memory and RAM. Flash memory is
used for storing downloaded application code and RAM is used for storing application programs,
sensor data, and intermediate computations. There is usually not enough space to run
complicated algorithms after loading OS and application code. In the Smart Dust project, for
example, TinyOS consumes about 3500 bytes of instruction memory, leaving only 4500 bytes for
security and applications. This makes it impractical to use the majority of current security
algorithms. With an Intel Mote, the situation is slightly improved, but still far from meeting the
requirements of many algorithms.
Transmission range: the communication range of sensor nodes is limited both technically and
by the need to conserve energy. The actual range achieved from a given transmission signal
strength is dependent on various environmental factors such as weather and terrain.
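
To put the energy figures above in perspective, the short Python sketch below converts the message expansion introduced by a security mechanism into instruction-equivalents, using the 800-1000 instructions-per-bit figure quoted earlier; the 8-byte MAC size is an assumed example, not a value taken from any particular scheme.

# Rough cost model based on the figure quoted above: transmitting one bit
# costs roughly as much energy as executing 800-1000 CPU instructions.
INSTRUCTIONS_PER_BIT = 1000          # upper end of the quoted range

def tx_cost_in_instructions(extra_bytes):
    """Instruction-equivalent energy cost of transmitting extra_bytes."""
    return extra_bytes * 8 * INSTRUCTIONS_PER_BIT

# Assumed example: a security scheme appends an 8-byte MAC to each packet.
mac_overhead = tx_cost_in_instructions(8)        # 64,000 instruction-equivalents
print(mac_overhead)

# Even a modest 8-byte expansion dwarfs a few thousand instructions of local
# cryptographic computation, which is why message expansion (rather than CPU
# time) usually dominates the energy budget of a security scheme.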

2.3 Security Requirements


The goal of security services in WSNs is to protect the information and resources from attacks
and misbehavior. The security requirements in WSNs include:
Availability, which ensures that the desired network services are available even in the presence
of denial-of-service attacks
Authorization, which ensures that only authorized sensors can be involved in providing
information to network services
Authentication, which ensures that the communication from one node to another node is
genuine, that is, a malicious node cannot masquerade as a trusted network node
Confidentiality, which ensures that a given message cannot be understood by anyone other than
the desired recipients

Integrity, which ensures that a message sent from one node to another is not modified by
malicious intermediate nodes
Nonrepudiation, which denotes that a node cannot deny sending a message it has previously
sent
Freshness, which implies that the data is recent and ensures that no adversary can replay old
messages.
Moreover, as new sensors are deployed and old sensors fail, we suggest that forward
and backward secrecy should also be considered:
Forward secrecy: a sensor should not be able to read any future messages after it leaves the
network.
Backward secrecy: a joining sensor should not be able to read any previously transmitted
message.
The security services in WSNs are usually centered around cryptography. However,
due to the constraints in WSNs, many existing security algorithms are not practical for use.
2.4 Threat Model
In WSNs, it is usually assumed that an attacker may know the security mechanisms that are
deployed in a sensor network; they may be able to compromise a node or even physically capture
a node. Due to the high cost of deploying tamper resistant sensor nodes, most WSN nodes are
viewed as not tamper-resistant. Further, once a node is compromised, the attacker is capable of
stealing the key materials contained within that node.
Base stations in WSNs are usually regarded as trustworthy. Most research studies focus on secure
routing between sensors and the base station. Deng et al. considered strategies against threats
which can lead to the failure of the base station. Attacks in sensor networks can be classified
into the following categories:
Outsider versus insider attacks: outside attacks are defined as attacks from nodes which do not

belong to a WSN; insider attacks occur when legitimate nodes of a WSN behave in unintended
or unauthorized ways.
Passive versus active attacks: passive attacks include eavesdropping on or monitoring packets
exchanged within a WSN; active attacks involve some modifications of the data stream or the
creation of a false stream.
Mote-class versus laptop-class attacks: in mote-class attacks, an adversary attacks a WSN by
using a few nodes with similar capabilities to the network nodes; in laptop-class attacks, an
adversary can use more powerful devices (e.g., a laptop) to attack a WSN. These devices have
greater transmission range, processing power, and energy reserves than the network nodes.

EVALUATION The following metrics are suggested for evaluating whether a security scheme is
appropriate for WSNs.
Security: a security scheme has to meet the requirements discussed above.
Resiliency: in case a few nodes are compromised, a security scheme should still protect against the attacks.
Energy efficiency: a security scheme must be energy efficient so as to maximize node and
network lifetime.
Flexibility: key management needs to be flexible so as to allow for different network
deployment methods, such as random node scattering and predetermined node placement.
Scalability: a security scheme should be able to scale without compromising the security
requirements.
Fault-tolerance: a security scheme should continue to provide security services in the presence

of faults such as failed nodes.


Self-healing: sensors may fail or run out of energy. The remaining sensors may need to be
reorganized to maintain a set level of security.
Assurance: assurance is the ability to disseminate different information at different levels to
end-users. A security scheme should offer choices with regard to desired reliability, latency, and
so on.

2.5 Attacks in Sensor Networks


WSNs are vulnerable to various types of attacks. According to the security requirements in
WSNs, these attacks can be categorized as:
Attacks on secrecy and authentication: standard cryptographic techniques can protect the
secrecy and authenticity of communication channels from outsider attacks such as
eavesdropping, packet replay attacks, and modification or spoofing of packets.
Attacks on network availability: attacks on availability are often referred to as denial-of-service (DoS)
attacks. DoS attacks may target any layer of a sensor network.
Stealthy attacks against service integrity: in a stealthy attack, the goal of the attacker is to make
the network accept a false data value. For example, an attacker compromises a sensor node and
injects a false data value through that sensor node. In these attacks, keeping the sensor network

available for its intended use is essential. DoS attacks against WSNs may permit real-world
damage to the health and safety of people . In this section, we focus only on DoS attacks and
their countermeasures in sensor networks. We discuss attacks on secrecy and authentication in
the section Secure Routing Protocols, and discuss stealthy attacks and countermeasures in the
section Intrusion Detection below. The DoS attack usually refers to an adversary's attempt to
disrupt, subvert, or destroy a network. However, a DoS attack can be any event that diminishes
or eliminates a network's capacity to perform its expected function. Sensor networks are usually
divided into layers, and this layered architecture makes WSNs vulnerable to DoS attacks, as DoS
attacks may occur in any layer of a sensor network.
PHYSICAL LAYER The physical layer is responsible for frequency selection, carrier frequency
generation, signal detection, modulation, and data encryption. As with any radio-based medium,
there exists the possibility of jamming in WSNs. In addition, nodes in WSNs may be deployed in
hostile or insecure environments where an attacker has easy physical access. These two
vulnerabilities are explored in this subsection.
Jamming Jamming is a type of attack which interferes with the radio frequencies that a
network's nodes are using. A jamming source may either be powerful enough to disrupt the entire
network or less powerful and only able to disrupt a smaller portion of the network. Even with
lesser-powered jamming sources, such as a small compromised subset of the network's sensor
nodes, an adversary has the potential to disrupt the entire network provided the jamming sources
are randomly distributed in the network. Typical defenses against jamming involve variations of
spread-spectrum communication such as frequency hopping and code spreading. Frequency-hopping spread spectrum (FHSS) is a method of transmitting signals by rapidly switching a
carrier among many frequency channels using a pseudorandom sequence known to both

transmitter and receiver. Without being able to follow the frequency selection sequence, an
attacker is unable to jam the frequency being used at a given moment in time. However, as the
range of possible frequencies is limited, an attacker may instead jam a wide section of the
frequency band. Code spreading is another technique used to defend against jamming attacks and
is common in mobile networks. However, this technique requires greater design complexity and
energy, thus restricting its use in WSNs. In general, to maintain low cost and low power
requirements, sensor devices are limited to single-frequency use and are therefore highly
susceptible to jamming attacks.
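
The following toy Python sketch illustrates the frequency-hopping idea described above: transmitter and receiver derive the same channel sequence from a shared seed, so a jammer without the seed cannot predict the active channel. The channel count and seed are arbitrary values chosen purely for illustration.

import random

# Toy illustration of FHSS channel selection: both ends seed an identical
# pseudorandom generator, so they hop through the same channel sequence.
NUM_CHANNELS = 16          # assumed number of available channels
SHARED_SEED = 0xC0FFEE     # secret shared by transmitter and receiver

def hop_sequence(seed, length):
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

tx_hops = hop_sequence(SHARED_SEED, 10)
rx_hops = hop_sequence(SHARED_SEED, 10)
assert tx_hops == rx_hops          # both sides agree on every hop

# A jammer without the seed sees an unpredictable sequence and, on average,
# guesses the active channel only 1/NUM_CHANNELS of the time.
print(tx_hops)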
Tampering Another physical layer attack is tampering. Given physical access to a node, an
attacker can extract sensitive information such as cryptographic keys or other data on the node.
The node may also be altered or replaced to create a compromised node which the attacker
controls. One defense to this attack involves tamper-proofing the node's physical package [5].
However, it is usually assumed that the sensor nodes are not tamper-proofed in WSNs due to the
additional cost. This indicates that a security scheme must consider the situation in which sensor
nodes are compromised.
LINK LAYER The data link layer is responsible for the multiplexing of data streams, data frame
detection, medium access, and error control. It ensures reliable point-to-point and point-to-multipoint connections in a communication network. Attacks at the link layer include purposely
introduced collisions, resource exhaustion, and unfairness. This subsection looks at each of these
three link-layer attack categories.
Collisions A collision occurs when two nodes attempt to transmit on the same frequency
simultaneously. When packets collide, a change will likely occur in the data portion, causing a
checksum mismatch at the receiving end. The packet will then be discarded as invalid. An

adversary may strategically cause collisions in specific packets such as ACK control messages. A
possible result of such collisions is the costly exponential back-off in certain media access
control (MAC) protocols. A typical defense against collisions is the use of error-correcting codes.
Most codes work best with low levels of collisions, such as those caused by environmental or
probabilistic errors. However, these codes also add additional processing and communication
overhead. It is reasonable to assume that an attacker will always be able to corrupt more than
what can be corrected. While it is possible to detect these malicious collisions, no complete
defenses against them are known at this time.
Exhaustion Repeated collisions can also be used by an attacker to cause resource exhaustion.
For example, a naive link-layer implementation may continuously attempt to retransmit the
corrupted packets. Unless these hopeless retransmissions are discovered or prevented, the energy
reserves of the transmitting node and those surrounding it will be quickly depleted. A possible
solution is to apply rate limits to the MAC admission control such that the network can ignore
excessive requests, thus preventing the energy drain caused by repeated transmissions. A second
technique is to use time-division multiplexing where each node is allotted a time slot in which it
can transmit. This eliminates the need of arbitration for each frame and can solve the indefinite
postponement problem in a back-off algorithm. However, it is still susceptible to collisions.
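
As an illustration of the rate-limiting countermeasure mentioned above, the Python sketch below implements a simple token-bucket admission check; the rate and bucket capacity are assumed values, not prescribed by any particular MAC protocol.

import time

# Simple token-bucket admission control: requests beyond the allowed rate are
# ignored, limiting the energy an attacker can drain with repeated
# retransmission requests.
class TokenBucket:
    def __init__(self, rate_per_sec=2.0, capacity=5):
        self.rate = rate_per_sec      # tokens added per second (assumed)
        self.capacity = capacity      # maximum burst size (assumed)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # admit the request
        return False                  # ignore excessive requests

bucket = TokenBucket()
decisions = [bucket.allow() for _ in range(10)]
print(decisions)    # the first few requests pass, the burst beyond capacity is dropped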
Unfairness Unfairness can be considered a weak form of a DoS attack. An attacker may cause
unfairness in a network by intermittently using the above link-layer attacks. Instead of preventing
access to a service outright, an attacker can degrade it in order to gain an advantage such as
causing other nodes in a real-time MAC protocol to miss their transmission deadline. The use of
small frames lessens the effect of such attacks by reducing the amount of time an attacker can
capture the communication channel. However, this technique often reduces efficiency and is

susceptible to further unfairness, for example, when an attacker is trying to retransmit quickly
instead of randomly delaying.
NETWORK AND ROUTING LAYER The network and routing layer of sensor networks is usually
designed according to the following principles [4]: power efficiency is an important
consideration; sensor networks are mostly data-centric; and an ideal sensor network has
attribute-based addressing and location awareness. The attacks in the network and routing layer
include the following.
Spoofed, Altered, or Replayed Routing Information The most direct attack against a routing
protocol in any network is to target the routing information itself while it is being exchanged
between nodes. An attacker may spoof, alter, or replay routing information in order to disrupt
traffic in the network. These disruptions include the creation of routing loops, attracting or
repelling network traffic from select nodes, extending or shortening source routes, generating
fake error messages, partitioning the network, and increasing end-to-end latency. A
countermeasure against spoofing and alteration is to append a message authentication code
(MAC) to the message. By checking the MAC, the receivers can verify whether the messages
have been spoofed or altered. To defend against replayed information, counters or timestamps
can be included in the messages [8]; a minimal sketch of such a check is given after the next
paragraph.
Selective Forwarding A significant assumption made in multihop networks is that all nodes in
the network will accurately forward received messages. An attacker may create malicious nodes
which selectively forward only certain messages and simply drop others. A specific form of this
attack is the black hole attack, in which a node drops all messages it receives. One defense
against selective forwarding attacks is using multiple paths to send data. A second defense is to
detect the malicious node, or assume it has failed, and seek an alternative route.
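
The sketch below illustrates the MAC-plus-counter countermeasure referred to above. It uses HMAC-SHA256 from the Python standard library purely as an example construction; the shared key and counter format are assumptions, not part of any specific WSN protocol.

import hmac, hashlib

# Illustrative MAC + counter protection for routing messages: the sender
# appends HMAC(key, counter || payload); the receiver verifies the tag and
# rejects stale counters (replays).
SHARED_KEY = b"example-pairwise-key"     # assumed pre-shared key

def protect(counter: int, payload: bytes, key: bytes = SHARED_KEY):
    msg = counter.to_bytes(4, "big") + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return msg + tag

def verify(packet: bytes, last_counter: int, key: bytes = SHARED_KEY):
    msg, tag = packet[:-32], packet[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
        return None                       # spoofed or altered
    counter = int.from_bytes(msg[:4], "big")
    if counter <= last_counter:
        return None                       # replayed
    return counter, msg[4:]

pkt = protect(7, b"route-update:n3->sink")
print(verify(pkt, last_counter=6))        # accepted
print(verify(pkt, last_counter=7))        # None: replay detected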
Sinkhole In a sinkhole attack, an attacker makes a compromised node look more attractive to
surrounding nodes by forging routing information. The end result is that surrounding nodes will

choose the compromised node as the next node to route their data through. This type of attack
makes selective forwarding very simple, as all traffic from a large area in the network will flow
through the adversary's node.
Sybil The Sybil attack is a case where one node presents more than one identity to the
network. Protocols and algorithms which are easily affected include fault-tolerant schemes,
distributed storage, and network-topology maintenance. For example, a distributed storage
scheme may rely on there being three replicas of the same data to achieve a given level of
redundancy. If a compromised node pretends to be two of the three nodes, the algorithms used
may conclude that redundancy has been achieved while in reality it has not.

Wormholes A wormhole is a low-latency link between two portions of the network over
which an attacker replays network messages. This link may be established either by a single node
forwarding messages between two adjacent but otherwise non-neighboring nodes or by a pair of
nodes in different parts of the network communicating with each other. The latter case is closely
related to the sinkhole attack, as an attacking node near the base station can provide a one-hop
link to that base station via the other attacking node in a distant part of the network. Hu et al.
presented a novel and general mechanism called packet leashes for detecting and defending
against wormhole attacks. Two types of leashes were introduced: geographic leashes and
temporal leashes. The proposed mechanisms can also be used in WSNs.
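
A rough Python sketch of the temporal-leash idea is given below: with tightly synchronized clocks, a receiver rejects packets whose apparent travel time implies a distance beyond the normal radio range. The range and clock-error bounds are assumed values for illustration only.

# Rough sketch of a temporal packet leash (after the packet-leash mechanism
# cited above): the sender timestamps the packet; the receiver bounds how far
# the packet could have travelled and rejects anything implying a wormhole-
# length hop.
SPEED_OF_LIGHT = 3.0e8        # m/s
MAX_RANGE = 300.0             # assumed maximum single-hop distance in metres
CLOCK_ERROR = 0.1e-6          # assumed clock synchronization error in seconds

def within_leash(t_sent: float, t_received: float) -> bool:
    travel_time = (t_received - t_sent) + CLOCK_ERROR   # worst-case bound
    max_distance = travel_time * SPEED_OF_LIGHT
    return max_distance <= MAX_RANGE

print(within_leash(0.0, 0.2e-6))    # True: consistent with a nearby neighbor
print(within_leash(0.0, 50e-6))     # False: packet apparently travelled kilometres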
Hello Flood Attacks Many protocols which use HELLO packets make the naive assumption
that receiving such a packet means the sender is within radio range and is therefore a neighbor.

An attacker may use a high-powered transmitter to trick a large area of nodes into believing they
are neighbors of that transmitting node. If the attacker falsely broadcasts a superior route to the
base station, all of these nodes will attempt transmission to the attacking node, despite many
being out of radio range in reality.
Acknowledgment Spoofing Routing algorithms used in sensor networks sometimes require
Acknowledgments to be used. An attacking node can spoof the Acknowledgments of overheard
packets destined for neighboring nodes in order to provide false information to those neighboring
nodes. An example of such false information is claiming that a node is alive when in fact it is
dead.

TRANSPORT LAYER The transport layer is responsible for managing end-to-end connections.
Two possible attacks in this layer, flooding and desynchronization, are discussed in this
subsection.

2.6 Key Management Protocols


Key management is a core mechanism to ensure the security of network services and
applications in WSNs. The goal of key management is to establish required keys between sensor
nodes which must exchange data. Further, a key management scheme should also support node
addition and revocation while working in undefined deployment environments. Due to the
constraints on sensor nodes, key management schemes in WSNs have many differences with the
schemes in ad hoc networks. As shown above, public key cryptography suffers from limitations
in WSNs. Thus, most proposed key management schemes are based on symmetric key
cryptography. Further, a straight pairwise private key sharing scheme between every pair of
nodes is also impractical in WSNs. A pairwise private key sharing scheme requires
predistribution and storage of n - 1 keys in each node, where n is the number of nodes in a

sensor network. Due to the large amount of memory required, pairwise schemes are not viable
when the network size is large (a rough storage estimate is sketched at the end of this section).
Moreover, most key pairs would be unusable since direct
communication is possible only among neighboring nodes. This scheme is also not flexible for
node addition and revocation. In this section, we discuss key management protocols in WSNs.
Another investigation of key management mechanisms for WSNs presents a taxonomy of key
management protocols. According to the network structure, the protocols can be
divided into centralized key schemes and distributed key schemes. According to the probability
of key sharing between a pair of sensor nodes, the protocols can be divided into probabilistic key
schemes and deterministic key schemes. In this section, we present a detailed overview of the
main key management protocols in WSNs. We start with key management protocols based on
network structure.
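
As a quick illustration of why full pairwise keying does not scale, the snippet below estimates the per-node storage for the n - 1 keys discussed above, assuming 128-bit symmetric keys; the network sizes are illustrative.

# Storage needed for full pairwise keying: each node stores n - 1 keys.
def pairwise_key_storage(n_nodes: int, key_bytes: int = 16) -> int:
    """Bytes of key material per node (assuming 128-bit symmetric keys)."""
    return (n_nodes - 1) * key_bytes

for n in (100, 1_000, 10_000):
    kb = pairwise_key_storage(n) / 1024
    print(f"{n:>6} nodes -> {kb:8.1f} KB of keys per node")

# At 10,000 nodes this is roughly 156 KB of key material, far beyond the few
# KB of RAM available on typical sensor platforms.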

2.7 Network Structure Based Key Management Protocols


The underlying network structure plays a significant role in the operation of key management
protocols. According to the structure, the protocols can be divided into two categories:
centralized key schemes and distributed key schemes.
Centralized Key Management Schemes In a centralized key scheme, there is only one entity,
often called a key distribution center (KDC), that controls the generation, regeneration, and
distribution of keys. The only proposed centralized key management scheme for WSNs in the
current literature is the LKHW scheme, which is based on the Logical Key Hierarchy (LKH). In
this scheme, the base station is treated as a KDC and all keys are logically distributed in a tree
rooted at the base station. The central controller does not have to rely on any auxiliary entity to
perform access control and key distribution. However, with only one managing entity, the central

server is a single point of failure. The entire network and its security will be affected if there is a
problem with the controller. During the time when the controller is not working, the network
becomes vulnerable as keys are not generated, regenerated, and distributed. Furthermore, the
network may become too large to be managed by a single entity, thus affecting scalability.
Distributed Key Management Schemes In the distributed key management approaches,
different controllers are used to manage key generation, regeneration, and distribution, thus
minimizing the risk of failure and allowing for better scalability. In this approach, more entities
are allowed to fail before the whole network is affected. Most proposed key management
schemes are distributed schemes. These schemes also fall into deterministic and probabilistic
categories, which are discussed in detail in the following subsection.

2.8 Secure Routing Protocols


Routing protocols have been specifically designed for WSNs. These routing protocols can be
divided into three categories according to the network structure: flat-based routing, hierarchical-based routing, and location-based routing. In flat-based routing, all nodes are typically assigned
equal roles or functionality. In hierarchical-based routing, nodes play different roles in the
network. In location-based routing, sensor node positions are used to route data in the network.
Although many sensor network routing protocols have been proposed in literature, few of them
have been designed with security as a goal. Lacking security services in the routing protocols,
WSNs are vulnerable to many kinds of attacks. Most network layer attacks against sensor
networks fall into one of the categories described above, namely: Spoofed, altered, or replayed
routing information Selective forwarding Sinkhole Sybil Wormholes Hello flood attacks

Acknowledgment spoofing These attacks may be applied to compromise the routing protocols in
a sensor network. For example, directed diffusion is a flat-based routing algorithm for drawing
information from a sensor network. In directed diffusion, sensors measure events and create
gradients of information in their respective neighboring nodes. The base station requests data by
broadcasting interest which describes a task to be conducted by the network. The interest is
diffused through the network hop by hop, and broadcasted by each node to its neighbors. As the
interest is propagated throughout the network, gradients are set up to draw data satisfying the
query towards the requesting node. Each sensor that receives the interest sets up a gradient
toward the sensor nodes from which it received the interest. This process continues until
gradients are set up from the sources back to the base station. Interests initially specify a low rate
of data flow, but once a base station starts receiving events it will reinforce one or more
neighboring nodes in order to request higher data rate events. This process proceeds recursively
until it reaches the nodes generating events, causing them to generate events at a higher data rate.
Paths may also be negatively reinforced. Directed diffusion is vulnerable to many kinds of
attacks if authentication is not included in the protocol. For example, it is easy for an adversary
to add himself/herself onto the path taken by a flow of events as described in the following: The
adversary can influence the path by spoofing positive reinforcements. After receiving and
rebroadcasting an interest, an adversary could strongly reinforce the nodes to which the interest
was sent while spoofing high-rate, low-latency events to the nodes from which the interest was
received. The adversary can replay the interests intercepted from a legitimate base station and
list himself/herself as a base station. All events satisfying the interest will then be sent to both the
adversary and the legitimate base station. By using the attacks above, the adversary can add
himself/ herself onto the path and thus gain full control of the flow. The adversary can eavesdrop,

modify, and selectively forward packets of his/her choosing. He/she can drop all forwarded
packets and act as a sinkhole. Further, a laptop-class adversary can exert great influence on the
topology by using a wormhole attack. The adversary creates a tunnel between a node located
near a base station and a node located close to where events are likely to be generated. By
spoofing positive or negative reinforcements, the adversary can push data flows away from the
base station and towards the nodes selected by the adversary. Hierarchical and location based
routing protocols not incorporating security services are also vulnerable to many attacks.

2.9 Leader Based Monitoring Approach for Sinkhole Attack


Udaya Suriya Rajkumar D., et al. [1] proposed an LBIDS (Leader Based Intrusion Detection
System) solution to detect and defend against the sinkhole attack in WSN. The proposed solution
consists of three algorithms: a Leader Election Algorithm, an Avoid Malicious Algorithm, and a
CheckIDS Algorithm. In this approach, a region-wise leader is elected for each group of nodes
within the network. The leader performs the intrusion detection mechanism by comparing and
analyzing the behavior of each node within the cluster, and monitors each node's behavior for
the occurrence of a sinkhole attack. When a compromised node is detected, the leader informs
the other leaders within the WSN about the sinkhole node, so that all the leaders in the network
stop communicating with that particular sinkhole node. The energy efficiency and intrusion
detection rate are high.

C. Sheela, et al. [2] proposed a routing algorithm based on mobile agents to defend against
sinkhole attacks in WSN. A mobile agent is a self-controlling software program that visits
every node in the network either periodically or on demand. By using the collected
information, the mobile agents make every node aware of the entire network, so that a valid
node will not listen to wrong information from a malicious or compromised node that could
lead to a sinkhole attack. An important feature of the proposed mechanism is that it does not
require any encryption or decryption mechanism for detecting the sinkhole attack, and it
consumes much less energy than normal routing protocols.
Maliheh Bahekmat, et al. [3] proposed a novel algorithm for detecting sinkhole attacks in
WSNs in terms of energy consumption. The proposed algorithm works by comparing the
control fields of the received data packets with the original control packet: whenever a node
needs to send data to the BS, it first sends a control packet directly to the main BS, and then
begins to send data packets hop by hop toward the BS. After the data packets arrive at the BS,
the BS compares the control fields of the received data packets with the original control
packet. If any manipulation of these control fields or loss in the data packets is detected, the
BS concludes that there is a malicious node on that path. An advantage of this method is the
very low energy consumed by the detection mechanism, and the algorithm can also be used for
the detection of wormhole attacks. The performance of the proposed algorithm is examined in
MATLAB simulation.
Tejinderdeep Singh and Harpreet Kaur Arora [4] proposed a solution for sinkhole attack
detection in WSN using the Ad-hoc On-Demand Distance Vector (AODV) routing protocol.
This system consists of three steps. The sender node first requests the sequence number with
an RREQ message, and the node replies with its sequence number in an RREP message. The
transmitting node then matches the sequence number against its routing table. If it matches,
the data will be shared; otherwise, a sequence number will be assigned to the node. If the node
accepts the sequence number, it is allowed to enter the network; otherwise, it is eradicated from
the network.
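
The Python fragment below gives one plausible reading of this sequence-number check; the routing-table layout and return values are assumptions for illustration and are not taken from the cited work.

# Simplified sketch of the sequence-number match described above: a sender
# compares the sequence number in an RREP with the value recorded in its
# routing table before trusting the route.
routing_table = {"n5": 12, "sink": 42}      # hypothetical node -> last known seq no

def handle_rrep(node_id: str, advertised_seq: int) -> str:
    known = routing_table.get(node_id)
    if known is not None and advertised_seq == known:
        return "SHARE_DATA"                 # sequence number matches: proceed
    # Otherwise record/assign the sequence number; a node that refuses the
    # assigned number would be removed from the network as a possible sinkhole.
    routing_table[node_id] = advertised_seq
    return "ASSIGN_SEQUENCE"

print(handle_rrep("sink", 42))   # SHARE_DATA
print(handle_rrep("n9", 7))      # ASSIGN_SEQUENCE (previously unknown node)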
S. Sharmila and Dr. G. Umamaheswari [5] proposed a solution for detection of sinkhole attacks
in wireless sensor networks using message digest algorithms. The main aim of this protocol is
to detect the exact sinkhole by using one-way hash chains. In the proposed method, the
destination detects the attack when the digest obtained through the trustable forward path and
the digest obtained through the trustable node differ. The method also ensures the data
integrity of the messages transferred over the trustable path. The algorithm is robust enough to
deal with cooperative malicious nodes that attempt to hide the real intruder. The functionality
of the proposed algorithm is tested in MATLAB simulation.
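
A minimal sketch of the digest-comparison idea is shown below, using SHA-256 as a stand-in for the message digest algorithm; the byte strings simulate the data received over the forwarding path and the digest delivered through the trusted node.

import hashlib

# Minimal sketch of detection by digest comparison: the destination compares
# the digest of the data as received over the forwarding path with the digest
# delivered via a trusted node; a mismatch indicates tampering on the path.
def digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

original = b"sensor-reading:23.5C"
tampered = b"sensor-reading:99.9C"        # data altered by a sinkhole en route

trusted_digest = digest(original)          # digest received via the trusted node
print(digest(original) == trusted_digest)  # True: clean path, no attack flagged
print(digest(tampered) == trusted_digest)  # False: attack detected on this path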
Ahmad Salehi S., et al. [6] proposed a lightweight algorithm to detect the sinkhole attack node
in WSNs. The algorithm consists of a two-step process: the first step finds a list of affected
nodes in the area by checking data consistency, and the second step effectively identifies the
intruder in the list by analyzing the network flow information. The algorithm is also robust
enough to deal with multiple malicious nodes that cooperatively hide the real intruder. The
proposed algorithm's performance has been evaluated by using numerical analysis and
simulations.
Murad A. Rassam, et al. [7] proposed a fuzzy-rule-based detection mechanism for sinkhole
attacks in MintRoute WSNs. The detection system is first distributed to each and every node to
keep monitoring the entire network, which assures a high detection probability. Second, the
decision about the attacker is made by the sink through a cooperation mechanism, after
receiving the ID of the suspected sinkhole from each node; communication with the suspected
node is then cut off by broadcasting the suspected node's ID to all sensor nodes. In this system
the sink is involved in making the decision about the attack based on the alarms received from
the nodes. This scheme is able to detect sinkhole attacks in small-scale WSNs.
The leader-based approach is a cost-effective and resource-efficient technique in which a leader
is elected to perform intrusion detection in the WSN. The WSN area is split into regions; each
region is considered a sub-network, every node is assigned an initial energy value of 100, and
the base station is assigned the highest energy value.
In the initial stage a random node is considered the leader node and the other nodes are regular
nodes. While the nodes are being constructed, each node has to register its information with the
cluster head. At the time of data transaction, the leader is elected on the basis of the highest
energy. This approach detects intrusions on the basis of the algorithms explained below:

Phase I: Leader Election Algorithm


1. Start procedure leader_election_model()
2. G = {N, E}, network G with N number of nodes are connected with edges E.
3. G = {{G1},{G2},{G3},....{Gi},....{Gm}}
4. Find center of G and elect a leader in that place as C
5. for i= 1 to m
6. N = {n1,n2,n3...ni,...nn } // number of nodes in group Gi
7. Assume Eo = 100, To =0; // initial energy to all nodes and time starts from 0.
8. At every time ti, calculate ei for all the nodes
9. Elect the cluster head Ci = ni such that e(ni) > e(nj) for every other node nj in Gi
10. Repeat steps 8 and 9 for all the Gi

11. Call LBIDS()


12. End procedure
Phase II: Algorithm For Avoid Malicious
1. Start procedure LBIDS()
2. ni <- source node
3. nj <- destination node
4. Find route from ni to nj
5. Let route R = {ni, na,nb,nc,....,nj}
6. Call checkIDS(R)
7. End procedure
Phase III: CheckIDS Algorithm
1. Start procedure checkIDS(R)
2. Route <- get nodes of R
3. Compare ID and location of route nodes
4. if ID, location exists in the info table
5. return " continue"
6. else
7. return "change the path"
8. end if
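
The following compact Python sketch ties the three phases together; the groups, energies, and info table are toy data, and the trust and location checks are simplified to match the pseudocode above rather than any particular implementation.

# Toy sketch of the three phases above: elect the highest-energy node of each
# group as leader (cluster head), then check a route's nodes against the
# registered info table before allowing the transaction.
groups = {                                   # hypothetical groups with node energies
    "G1": {"n1": 80, "n2": 95, "n3": 60},
    "G2": {"n4": 70, "n5": 88},
}
info_table = {                               # registered (ID, location) pairs
    "n1": (0, 0), "n2": (1, 0), "n3": (2, 1), "n4": (5, 5), "n5": (6, 5),
}

def leader_election(groups):
    """Phase I: pick the node with maximum energy in each group."""
    return {g: max(nodes, key=nodes.get) for g, nodes in groups.items()}

def check_ids(route, observed_locations):
    """Phase III: continue only if every node's ID and location are registered."""
    for node in route:
        if info_table.get(node) != observed_locations.get(node):
            return "change the path"         # mismatch: possible sinkhole
    return "continue"

def lbids(src, dst, route, observed_locations):
    """Phase II: verify the computed route from src to dst."""
    return check_ids(route, observed_locations)

leaders = leader_election(groups)
print(leaders)                                           # {'G1': 'n2', 'G2': 'n5'}
print(lbids("n1", "n5", ["n1", "n2", "n5"], info_table)) # 'continue'
fake = dict(info_table, n2=(9, 9))                       # n2 claims a false location
print(lbids("n1", "n5", ["n1", "n2", "n5"], fake))       # 'change the path'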

CHAPTER 3
RESEARCH METHODOLOGY

3.1 Research Design


For our research we will take up a descriptive research design, as it answers the question "what is
going on?" A good description is fundamental to the research enterprise, and it adds
immeasurably to our knowledge of the shape and nature of society.
Data collection will be done in two phases.
Preliminary Phase: In the initial phase we will try to understand the sinkhole attack in
WSN. The methodology which will be used for carrying out the report is as follows.
Type of Data Sources: For the present research work, preliminary and secondary data will be used.

3.2 Tools for Collecting Preliminary Data: We are using the NetSim tool for detection of the sinkhole
attack in WSN.
3.3 Tools for Collecting Secondary Data: Various statistical tools will also be used to
analyze the secondary data.
1. Document Review: obtaining the actual forms and operating documents currently being
used, and reviewing blank copies of forms and samples of actual completed forms.
2. Observation: analyzing annual reports and press releases, and verifying the statements made
during the interviews.
3. Web Search: information related to other regions (other parts of India and the globe)
will be studied from the internet and other published papers.
4. Various policies will be dealt with in detail by referring to various government publications,
reference books, journals, and published data from time to time.
3.4 Research Objective
To design a mechanism that can efficiently handle various security aspects.

To design a mechanism that can detect intrusions in the network.

To design a mechanism that can handle resource constraints.

CHAPTER 4
EXPERIMENTAL SETUP AND RESULTS

NS-2 is an event-driven, open-source simulator mainly used for academic
research in the areas of computer networks, MANETs, and WSNs. From the days of its first release
it has excited the minds of researchers, students, and network practitioners, and opened up many
possibilities for simulating different protocols before they are actually implemented in
real systems.
Network Simulation is a technique where a program models the behavior of a network either by
calculating the interaction between the different network entities (hosts/routers, data links,
packets, etc) using mathematical formulas, or actually capturing and playing back observations
from a production network. When a simulation program is used in conjunction with live
applications and services in order to observe end-to-end performance to the user desktop, this
technique is also referred to as network emulation.
A network simulator is a software program that imitates the working of a computer network. In
simulators, the computer network is typically modeled with devices, traffic, etc., and the

performance is analyzed. Typically, users can then customize the simulator to fulfill their specific
analysis needs. Simulators typically come with support for the most popular protocols in use
today, such as IPv4, IPv6, UDP, and TCP.
Most of the commercial simulators are GUI driven, while some network simulators require input
scripts or commands (network parameters). The network parameters describe the state of the
network (node placement, existing links) and the events (data transmissions, link failures, etc).
An important output of simulations is the trace files. Trace files can document every event that
occurred in the simulation and are used for analysis. Certain simulators have added functionality
of capturing this type of data directly from a functioning production environment, at various
times of the day, week, or month, in order to reflect average, worst-case, and best-case
conditions. Network simulators can also provide other tools to facilitate visual analysis of trends
and potential trouble spots.
The notable network simulators available are ns2 and OPNET. The most popular Open-Source
Simulators available in the market are NS (also called NS-2), PDNS (Parallel/Distributed NS),
GloMoSim, SSFNet (Scalable Simulation Framework Net Models), DaSSF (Dartmouth SSF),
OMNET++ and others.
Design of NS-2:
ns is built in C++ and provides a simulation interface through OTcl, an object-oriented dialect of
Tcl. The user describes a network topology by writing OTcl scripts, and then the main ns
program simulates that topology with the specified parameters. NS-2 makes use of a flat-earth
model, in which it assumes that the environment is flat without any elevations or depressions.
However, the real world does have geographical features such as valleys and mountains, which
NS-2 fails to capture.

Many researchers have proposed additions of new models to NS-2. The shadowing model in NS-2
attempts to capture the shadowing effect of signals in real life, but does so inaccurately: NS-2's
shadowing model does not consider correlations, whereas a real shadowing effect has strong
correlations between two locations that are close to each other. Shadow fading should instead be
modeled as a two-dimensional log-normal random process with exponentially decaying spatial correlations.
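
For reference, the short Python sketch below evaluates the log-normal shadowing path-loss model that underlies NS-2's shadowing model; the path-loss exponent, shadowing deviation, and reference values are assumed typical numbers, and the sketch deliberately omits the spatial correlation that the text argues is missing.

import math, random

# Log-normal shadowing model: mean path loss follows a power law with
# distance, plus a zero-mean Gaussian term (in dB) for shadow fading:
#   PL(d) = PL(d0) + 10 * beta * log10(d / d0) + X_sigma,  X_sigma ~ N(0, sigma)
PL_D0 = 40.0      # assumed path loss at reference distance d0 (dB)
D0 = 1.0          # reference distance (m)
BETA = 2.7        # assumed path-loss exponent
SIGMA = 4.0       # assumed shadowing standard deviation (dB)

def path_loss_db(d, rng=random.Random(1)):
    shadowing = rng.gauss(0.0, SIGMA)                       # X_sigma
    return PL_D0 + 10.0 * BETA * math.log10(d / D0) + shadowing

for d in (10, 50, 100):
    print(f"d = {d:3d} m  ->  path loss ~ {path_loss_db(d):.1f} dB")

# NS-2 draws each X_sigma independently; a more realistic model would
# correlate nearby samples with an exponentially decaying correlation.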

CHAPTER 7
FUTURE WORK AND CONCLUSION

Future Work
In the future, the leader election mechanism can be improved in terms of energy and edge
efficiency, where the group of nodes is treated as a cluster and the leader is the Cluster Head (CH)
elected by energy value: the node with the maximum energy and edges is taken as the CH, and
the IDS is deployed in the CH. The functionality of the current work and the future work is the
same; the scope of the future work is to improve the energy consumption and lifetime of the
network.
Conclusion
We have presented an effective method for identifying a sinkhole attack in a wireless sensor
network. The algorithm consists of two steps. It first locates a list of suspected nodes by
checking data consistency, and then identifies the intruder in the list through analyzing the
network flow information. We have also presented a series of enhancements to deal with
cooperative malicious nodes that attempt to hide the real intruder.

The performance of the proposed algorithm has been examined through both numerical analysis
and simulations.
The results have demonstrated the effectiveness and accuracy of the algorithm. They also suggest
that its communication and computation overheads are reasonably low for wireless sensor
networks.

REFERENCES
Grosso A, Coccoli M, Boccalatte A. An agent programming framework based on the C# language and the CLI. In: 1st International Workshop on C# and .NET Technologies on Algorithms, Computer Graphics, Visualization, Computer Vision and Distributed Computing; 2013.
Chakeres ID. AODV routing protocol implementation design. In: Proceedings of Distributed Computing Systems Workshops. IEEE; 2013. p. 698-703.
Chen C, Song M, Hsieh G. Intrusion detection of sinkhole attacks in large-scale wireless sensor networks. In: Proceedings of Wireless Communications, Networking and Information Security (WCNIS). IEEE; 2010. p. 711-16.
Culpepper BJ, Tseng HC. Sinkhole intrusion indicators in DSR MANETs. In: Proceedings of the International Conference on Broadband Networks. IEEE; 2004. p. 681-88.
Denning DE. An intrusion detection model. IEEE Transactions on Software Engineering 1987;13:222-32.
Qi H, Xu Y, Wang X. Mobile-agent-based collaborative signal and information processing in sensor networks. Proceedings of the IEEE 2003. p. 1172-83.
Sundani H, Li H, Devabhaktuni V, Alam M, Bhattacharya P. Wireless sensor network simulators: a survey and comparisons. International Journal of Computer Networks (IJCN) 2011;2:249-65.
Heinzelman WB. Application-specific protocol architectures for wireless networks. PhD thesis. Massachusetts Institute of Technology; 2000.
Kabara J, Calle M. MAC protocols used by wireless sensor networks and a general method of performance evaluation. International Journal of Distributed Sensor Networks 2012;2012.
Karlof C, Wagner D. Secure routing in sensor networks: attacks and countermeasures. In: Proceedings of the 1st IEEE Workshop on Sensor Network Protocols and Applications; 2003. p. 113-27.
Krontiris I, Dimitriou T, Giannetsos T, Mpasoukos M. Intrusion detection of sinkhole attacks in wireless sensor networks. In: Algorithmic Aspects of Wireless Sensor Networks, Lecture Notes in Computer Science, vol. 4837. Springer; 2008. p. 150-61.
Chen M, Kwon T, Yuan Y, Leung VCM. Mobile agent based wireless sensor networks. Journal of Computers 2006;1:14-21.
Ngai ECH, Liu J, Lyu MR. An efficient intruder detection algorithm against sinkhole attacks in wireless sensor networks. Computer Communications 2007;30:2353-64.
Perkins C. Ad hoc on-demand distance vector (AODV) routing. RFC 3561. Available at: http://www.ietf.org/rfc/rfc3561.txt; 2003.
Samundiswary P, Dananjayan P. Detection of sinkhole attacks for mobile nodes in heterogeneous sensor networks with mobile sinks. International Journal of Computer and Electrical Engineering 2010;2:127-33.
Sen J. A survey on wireless sensor network security. International Journal of Communication Networks and Information Security (IJCNIS) 2009;1:55-78.
Sharmila S, Umamaheswari G. Detection of sinkhole attack in wireless sensor networks using message digest algorithms. In: Proceedings of Process Automation, Control and Computing (PACC). IEEE; 2011. p. 1-6.
Sheela D, Naveen Kumar C, Mahadevan G. A non-cryptographic method of sinkhole attack detection in wireless sensor networks. In: Proceedings of the IEEE International Conference on Recent Trends in Information Technology; 2011. p. 527-32.
Stafrace SK, Antonopoulos N. Military tactics in agent-based sinkhole attack detection for wireless ad hoc networks. Computer Communications 2010;33:619-38.
Tumrongwittayapak C, Varakulsiripunth R. Detecting sinkhole attacks in wireless sensor networks. In: Proceedings of the International Joint Conference. IEEE; 2009. p. 1966-71.
Khan WZ, Xiang Y, Aalsalem MY. Comprehensive study of selective forwarding attack in wireless sensor networks. International Journal of Computer Network and Information Security 2011;3:1-10.
Hu YC, Perrig A, Johnson DB. Wormhole attacks in wireless networks. IEEE Journal on Selected Areas in Communications 2006:370-80.
