
Localization in Sensor Networks

BY: GAURAV KHANNA
15RE91R04

Introduction

In many applications of sensor networks, it is necessary to automatically locate people, equipment, and other tangible assets.

Various techniques have evolved over the years that allow sensor nodes to learn their location automatically.

Since each approach solves a slightly different problem or supports different applications, they vary in many parameters, such as the physical phenomena used for location determination, power requirements, infrastructure versus portable elements, and resolution in time and space.

Hence, the determination of accurate location is a topic of keen interest among researchers.

Properties of Localization [1]

Physical position and symbolic location

Absolute versus relative

Localized location computation

Accuracy and precision

Scale

Recognition

Cost

Limitations

Localization sensing techniques

The major approaches to determining a node's position are:

Using information about a node's neighborhood (proximity-based approaches)

Exploit the finite range of wireless communication
E.g., it is easy to determine the location in a room with infrared room-number announcements

Exploiting geometric properties of a given scenario (triangulation and trilateration)

Using elementary geometry, the distance between two nodes or the angle in a triangle can be estimated.
When distances between entities are used, the approach is called lateration; when angles between nodes are used, one talks about angulation.

Analyzing characteristic properties of the position of a node in comparison with premeasured properties (scene analysis)

Range based localization schemes

Received signal strength indication (RSSI)

Time of arrival (ToA)

Time difference of arrival (TDoA)

Angle of arrival (AoA)

RANGE-BASED DISTANCE ESTIMATION [2], [3]
Received Signal Strength (RSS) techniques measure the power of the signal
at the receiver. Based on the known transmit power, the respective propagation
loss can be calculated. Theoretical or empirical models are used to translate
this loss into a distance estimate. This method has been used mainly for RF
signals.

Friis free-space equation: Pr(d) = Pt * Gt * Gr * (lambda / (4 * pi * d))^2

Note: The Friis free-space equation above does not consider losses.
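Inverting the Friis relation turns a measured receive power into a distance estimate. A minimal Python sketch, assuming ideal free-space propagation and illustrative unit antenna gains (real deployments calibrate an empirical path-loss model instead, precisely because Friis ignores losses):

```python
import math

def friis_received_power(pt, gt, gr, wavelength, d):
    """Free-space received power: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))**2."""
    return pt * gt * gr * (wavelength / (4 * math.pi * d)) ** 2

def distance_from_rss(pt, gt, gr, wavelength, pr):
    """Invert Friis to estimate sender-receiver distance from measured Pr."""
    return (wavelength / (4 * math.pi)) * math.sqrt(pt * gt * gr / pr)
```

For example, at 2.4 GHz (wavelength about 0.125 m), the power predicted for a 10 m link maps back to 10 m when fed through the inverse.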

Time of arrival (ToA) / Time difference of arrival (TDoA) [2], [4]
In this case, the distance between two nodes is directly proportional to the time the signal takes to propagate from one point to another.

This way, if a signal was sent at time t1 and reached the receiver node at time t2, the distance between sender and receiver is d = c * (t2 - t1), where c is the propagation speed of the radio signal (the speed of light), and t1 and t2 are the times when the signal was sent and received, Fig. (a).

In TDoA, two signals with different propagation speeds (typically radio and ultrasound) are emitted simultaneously, and nodes compute the difference in their arrival times t1 and t2. Since d = sr * (t1 - t0) = ss * (t2 - t0), eliminating the (unknown) send time t0 gives d = (t2 - t1) * sr * ss / (sr - ss), where sr and ss are the propagation speeds of the radio and sound signals, respectively.
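The ToA computation, and the TDoA variant in which a radio and an ultrasound signal are emitted together, can be sketched as follows (constants and function names are illustrative):

```python
C_RF = 299_792_458.0   # radio propagation speed sr in m/s (speed of light)
S_SOUND = 343.0        # ultrasound propagation speed ss in air, m/s (approximate)

def toa_distance(t1, t2, speed=C_RF):
    """ToA: d = c * (t2 - t1), signal sent at t1 and received at t2."""
    return speed * (t2 - t1)

def tdoa_distance(t1, t2, sr=C_RF, ss=S_SOUND):
    """TDoA with two simultaneously emitted signals: t1 and t2 are the
    radio and sound arrival times, so d = (t2 - t1) * sr * ss / (sr - ss)."""
    return (t2 - t1) * sr * ss / (sr - ss)
```

Note how ToA with RF alone needs nanosecond-level clocks (1 microsecond of timing error is about 300 m), which is why the slow second signal in TDoA is attractive for sensor nodes.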

Angle of arrival (AoA) [2]

The AoA is estimated using directive antennas or an array of receivers (usually three or more) that are uniformly separated.

Based on the arrival times of the signal at each of the receivers, it becomes possible to estimate the AoA of the signal.

Scene analysis

In scene analysis, pictures taken by a camera are analyzed to derive the position from the picture.

This requires substantial computational effort and is hardly appropriate for sensor nodes.

But apart from visual pictures, other measurable characteristic fingerprints of a given location can be used for scene analysis, for example, radio wave propagation patterns.

One option is to use signal strength measurements of one or more anchors transmitting at a known signal strength and compare the actually measured values with those stored in a database of values previously measured off-line for each location.

The RADAR system is one example that uses this approach to determine positions in a building.

Multilateration

In reality, distance measurements are never perfect, and the intersection of three circles will, in general, not result in a single point.

To overcome these imperfections, distance measurements from more than three anchors can be used, resulting in a multilateration problem.

Angulation exploits the fact that in a triangle, once the length of two sides and two angles are known, the position of the third point is known as the intersection of the two remaining sides of the triangle.

The problem of imprecise measurements arises here as well and can also be solved using multiple measurements.

Mathematical basics for lateration [4]

Assume the distances to three points with known locations are exactly given.

Solve the system of equations (using the Pythagorean theorem):

(xi - xu)^2 + (yi - yu)^2 = ri^2,  i = 1, 2, 3

(xi, yi): coordinates of anchor point i; ri: distance to anchor i; (xu, yu): unknown coordinates of the node.

Subtracting eq. 3 from eqs. 1 and 2 cancels the quadratic terms xu^2 and yu^2.

Rearranging terms gives a linear equation in (xu, yu):

2(x3 - xi) xu + 2(y3 - yi) yu = ri^2 - r3^2 - xi^2 + x3^2 - yi^2 + y3^2,  i = 1, 2

Trilateration as a matrix equation

Rewriting as a matrix equation Ax = b:

A = [ 2(x3 - x1), 2(y3 - y1) ; 2(x3 - x2), 2(y3 - y2) ],  x = (xu, yu)^T,
b_i = ri^2 - r3^2 - xi^2 + x3^2 - yi^2 + y3^2  (i = 1, 2)

What if only noisy distance estimates ri + ei (true range plus error) are available?

Use multiple anchors, giving an overdetermined system of equations.

Use the (xu, yu) that minimize the mean square error ||Ax - b||^2.

Given a matrix equation Ax = b, the normal equation A^T A x = A^T b yields the x that minimizes the sum of the squared differences between the left and right sides.

It is called a normal equation because:

A^T A is a normal matrix.

b - Ax is normal (orthogonal) to the range of A.
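The least-squares multilateration described above can be sketched in pure Python: linearize by subtracting the last anchor's circle equation, then solve the resulting 2x2 normal equations (A^T A) x = A^T b with Cramer's rule (the function name and anchor layout are illustrative):

```python
def multilaterate(anchors, ranges):
    """Least-squares position estimate from >= 3 anchors.

    anchors: list of (x, y) anchor coordinates (not all collinear);
    ranges:  matching list of distance estimates ri.
    """
    xn, yn = anchors[-1]          # reference anchor (subtracted from the rest)
    rn = ranges[-1]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[:-1], ranges[:-1]):
        rows.append((2.0 * (xn - xi), 2.0 * (yn - yi)))
        rhs.append(ri**2 - rn**2 - xi**2 + xn**2 - yi**2 + yn**2)
    # Accumulate the 2x2 normal equations: (A^T A) x = A^T b.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12   # zero if anchors are collinear
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With exact ranges the estimate recovers the true point; with noisy ranges it returns the mean-square-error minimizer discussed above.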

Single Hop localization

Active Badge

Active office

RADAR

Cricket

A few other techniques are:

Overlapping Connectivity

Approximate point in triangle

Using Angle of Arrival information

Active Badge [5]

It was the first system for locating people in an office environment.

Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors.

It uses diffuse infrared as the transmission medium and exploits the natural blocking of infrared waves by walls as a delimiter for its location granularity.

A badge periodically sends a globally unique identifier via infrared to receivers, at least one of which is installed in every room. The mapping of identifiers to receivers (and hence rooms) is stored on a central server, which can be queried for the location of a given badge.

Active office [6]

It is a system that can determine the location and orientation of objects within a building.

The information provided by the system is sufficiently fine-grained to allow investigation of a new set of context-aware applications.

Here, ultrasound is used, with receivers placed at well-known positions, mounted in an array at the ceiling of a room; devices whose position is to be determined act as ultrasound senders.

Furthermore, the wireless, low-powered nature of the location sensors allows them to be integrated into an everyday working environment with relative ease.

RADAR [7]

The RADAR system is also geared toward indoor computation of position estimates.

Its most interesting aspect is its use of scene analysis techniques, comparing the received signal characteristics from multiple anchors with premeasured and stored characteristic values.

Both the anchors and the mobile device can be used to send the signal, which is then measured by the counterpart device(s).

While this is an intriguing technique, the necessary off-line deployment phase for measuring the signal landscape cannot always be accommodated in practical systems.

Cricket [8]

In the Active Badge and active office systems described above, the infrastructure determines the device positions.

Sometimes it is more convenient if the devices themselves can compute their own positions or locations, for example, when privacy issues become relevant.

Therefore, Cricket uses a combination of RF and ultrasound hardware to enable a listener to determine the distance to beacons, from which the closest beacon can be unambiguously inferred.

Positioning in a multi-hop environment [9]

How can the range to a node with which no direct radio communication exists be estimated?

No RSSI or TDoA measurement is available, but multihop communication is possible.

Idea 1: Count the number of hops and assume the length of one hop is known (DV-Hop).

Idea 2: If range estimates between neighbors exist, use them to improve the total route-length estimate of the previous method (DV-Distance).
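Idea 1 (DV-Hop) can be sketched as follows, under the usual assumption that each anchor learns its hop counts to the other anchors and broadcasts an average hop length as a correction factor (helper names are illustrative):

```python
import math

def avg_hop_size(anchor, other_anchors, hop_counts):
    """DV-Hop correction factor for one anchor: total straight-line
    distance to the other anchors divided by the total hop count."""
    total_distance = sum(math.dist(anchor, o) for o in other_anchors)
    return total_distance / sum(hop_counts)

def dv_hop_range(hop_size, hop_count):
    """Range estimate to an anchor: hops to it times the average hop length."""
    return hop_size * hop_count
```

An unknown node then multilaterates using these hop-based range estimates instead of measured ranges.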

Iterative multilateration

Assume some nodes can hear at least three anchors (enough to perform triangulation), but not all nodes can.

Idea: Let more and more nodes compute position estimates, spreading position knowledge through the network.

Problem: Errors accumulate.

Probabilistic position description

Similar in idea to the previous approach, but here the position of nodes is only probabilistically known.

Represent this probability explicitly and use it to compute probabilities for further nodes.

Centroid Algorithm (Range-Free)[10]

The centroid algorithm proceeds in the following steps:

All anchor nodes broadcast their location information and identity to all sensor nodes in their transmission range.

All nodes listen to the signal for a fixed time t and collect the location information from the various anchor nodes.

All un-localized nodes determine their position by forming a polygon, as shown in the figure, and calculating the centroid of the positions of all anchor nodes in their range using the formula:

Xest = (X1 + X2 + ... + Xn) / n
Yest = (Y1 + Y2 + ... + Yn) / n

where (X1, Y1) ... (Xn, Yn) are the anchor node coordinates and (Xest, Yest) are the estimated coordinates of the node.

The major drawback is that it produces a large localization error.
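The centroid formula above amounts to averaging the heard anchors' coordinates; a minimal sketch:

```python
def centroid_estimate(anchors):
    """Range-free centroid: average the coordinates of all anchors heard.

    anchors: list of (x, y) positions received during the listening window t.
    """
    n = len(anchors)
    x_est = sum(x for x, _ in anchors) / n
    y_est = sum(y for _, y in anchors) / n
    return x_est, y_est
```

The simplicity is the appeal: no ranging hardware is needed, at the cost of the large localization error noted above.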

Relative Neighborhood Graph[11]

In many problems, one is given a set of points in the plane, and it is desired to find some structure among the points in the form of edges connecting a subset of the pairs of points.

There is an edge between nodes u and v if and only if there is no other node w that is closer to both u and v than they are to each other.

Formally: (u, v) is an RNG edge iff there is no w with max{d(u, w), d(v, w)} < d(u, v).

The RNG maintains the connectivity of the original graph.

It is easy to compute locally.

But: the worst-case spanning ratio is Θ(|V|).

The average degree is 2.6.

The lune-shaped region (the intersection of the two circles of radius d(u, v) centered at u and v) has to be empty for the two nodes to be connected.

Voronoi Diagrams/ Delaunay Triangulation

Voronoi diagram: Assign to each node all points in the plane for which it is the closest node.

It can be constructed in O(|V| log |V|) time.

Delaunay triangulation: Connect any two nodes whose Voronoi regions touch.

Problem: It might produce very long links, so it is not well suited for power control.

Figure: edges of the Delaunay triangulation.

Gabriel Graph

The Gabriel graph (GG) is similar to the RNG.

Difference: The smallest circle with nodes u and v on its circumference (i.e., with uv as diameter) must contain only u and v for u and v to be connected.

Formally: (u, v) is a GG edge iff d^2(u, w) + d^2(v, w) >= d^2(u, v) for all other nodes w.

Properties: It maintains connectivity; the worst-case spanning ratio is Θ(|V|^(1/2)); the energy stretch is O(1) (depending on the consumption model!); the worst-case degree is Θ(|V|).

The circle with uv as diameter has to be empty for the two nodes to be connected.
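The emptiness test for the diametral circle follows directly from the formal condition (a point w lies inside the circle with diameter uv exactly when d^2(u, w) + d^2(v, w) < d^2(u, v)); a small sketch with illustrative names:

```python
def gg_edge(u, v, points):
    """Gabriel graph test: u-v is an edge iff the circle with diameter uv
    contains no third node, i.e. d(u,w)^2 + d(w,v)^2 >= d(u,v)^2 for all w."""
    d_uv2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    for w in points:
        if w == u or w == v:
            continue
        d_uw2 = (u[0] - w[0]) ** 2 + (u[1] - w[1]) ** 2
        d_vw2 = (v[0] - w[0]) ** 2 + (v[1] - w[1]) ** 2
        if d_uw2 + d_vw2 < d_uv2:   # w lies strictly inside the circle
            return False
    return True
```

Because each node only needs its neighbors' coordinates for this test, the GG (like the RNG) can be computed locally.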

Two algorithms for finding the RNG

Given: n points and their Cartesian coordinates p1(x1, y1), p2(x2, y2), ..., pn(xn, yn).

Algorithm 1:
(1) Compute the distance between all pairs of points, d(pi, pj), i, j = 1, 2, ..., n, i != j.
(2) For each pair of points (pi, pj), compute dk = max{d(pk, pi), d(pk, pj)} for k = 1, 2, ..., n, k != i, k != j.
(3) For each pair of points (pi, pj), search for a value of dk that is smaller than d(pi, pj). If no such point pk is found, an edge is formed between pi and pj.

Algorithm 2:
(1) Compute the Voronoi diagram of the set of points.
(2) Obtain the Delaunay triangulation (DT) from the Voronoi diagram.
(3) For each pair of points (pi, pj) associated with an edge of the DT, compute dk = max{d(pk, pi), d(pk, pj)} for k = 1, 2, ..., n, k != i, k != j.
(4) Same as step 3 of Algorithm 1, with edges of the DT only.
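Algorithm 1 translates almost directly into code; a brute-force O(n^3) sketch returning index pairs:

```python
import math
from itertools import combinations

def rng_edges(points):
    """Brute-force RNG (Algorithm 1): keep edge (pi, pj) unless some third
    point pk satisfies max(d(pk, pi), d(pk, pj)) < d(pi, pj)."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        d_ij = math.dist(points[i], points[j])
        blocked = any(
            max(math.dist(points[k], points[i]),
                math.dist(points[k], points[j])) < d_ij
            for k in range(len(points)) if k not in (i, j)
        )
        if not blocked:
            edges.append((i, j))
    return edges
```

Algorithm 2 reduces the candidate edges to those of the Delaunay triangulation, cutting the pair loop from O(n^2) to O(n) edges.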

Mobile target tracking in WSNs

The RNG tessellates the plane into polygonal shapes called faces.

If every node is aware of its own location and of all its neighbors in the faces, by using the global positioning system (GPS) or other techniques (already discussed), then this can be exploited in tracking mobile targets.

Typical examples include establishing survivable military surveillance systems, environmental and industrial monitoring, and personnel and wildlife monitoring systems requiring tracking schemes capable of deducing kinematic characteristics such as position, velocity, and acceleration of single or multiple targets of interest.

Target Tracking with Monitor and Backup Sensors in Wireless Sensor Networks (TTMB) [12]

If the number of active sensors is large, the tracking accuracy can be high, but at the cost of high energy consumption.

So, for tracking a mobile target, the idea is to use as few sensors as possible.

TTMB is a novel lightweight approach to implementing target tracking in WSNs, combining geographic routing and prediction methods.

This protocol relies on accumulated information from a small number of sensor nodes. It performs low-complexity, prediction-based cooperative tracking that compares the data received from different nodes.

TTMB Introduction

An entity that intends to track a target is called a tracker.

A tracker is assumed to be a single generic source, such as a mobile user or a respective authority.

A target can be any mobile entity, such as an enemy vehicle or an intruder.

Each sensor in the network has the capability of sensing, communicating, and computing.

One of the active and working sensors is elected as a monitor, and another one is elected as a backup for fault-tolerance concerns.

The tracker queries the sensor network to follow a target; the monitor works on the request of the tracker.

All sensors can be in three states: awake, active, or inactive.

Working (Outline) of TTMB protocol

Planarization model

A sensor network can be modeled as a graph G = (V, E) by utilizing two well-known distributed planarization algorithms, the Gabriel graph (GG) and the relative neighborhood graph (RNG) (already discussed).

In the given figure, note that node v1 corresponds to 3 adjacent faces, namely F1, F2, and F18.

Suppose a target is presently in F2 and v1 is a monitor node; then F1 and F18 are called neighbor faces.

So v1 stores information about the 3 faces that are adjacent to it in the planar subgraph: (v1, v3, v4, v5), (v1, v5, v6, v7, v2), and (v1, v2, v3).

Node v1 has only 3 neighbor nodes, v2, v3, and v5, but with respect to the target position, v1 has 2 neighbor nodes in F2, v5 and v2, called immediate neighbors.

The rest of v1's neighbor nodes in F2, v6 and v7, are called distant neighbors.

State transition and energy consumption model

Assume a tracking event is captured by a sensor node at some time t0, processing is finished at time t1, and the next tracking event occurs at time t2 = t1 + ti.

According to the state transition diagram shown in the figure, each state sk has a power consumption Pk, and the transition times into the state and back out of it are given by t_d,k and t_u,k, respectively.

Typically, for node states with i > j, Pj > Pi, t_d,i > t_d,j, and t_u,i > t_u,j, and the relevant power differences are P0 - Pk and P0 + Pk.

When the node changes state from s0 to, say, sk, the energy savings E_s,k due to the state transition, and the corresponding sleep thresholds T_th,k for the states sk, are computed as follows:

Mobile target positioning and movement

The monitor can determine a target's position, velocity, and direction.

Given the target's present location oLi = (xi, yi) at time ti and its previous location oLi-1 = (xi-1, yi-1) at time ti-1, we can estimate the target's speed v and direction theta as:

v = sqrt((xi - xi-1)^2 + (yi - yi-1)^2) / (ti - ti-1),  theta = atan2(yi - yi-1, xi - xi-1)

Using these values, the predicted location of the target (xi+1, yi+1) after a given time t is given by:

xi+1 = xi + v * t * cos(theta),  yi+1 = yi + v * t * sin(theta)
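The speed, direction, and predicted-location computations amount to standard dead reckoning from two timestamped observations; a sketch with illustrative names:

```python
import math

def estimate_motion(prev, cur, t_prev, t_cur):
    """Speed v and heading theta from two timestamped target observations."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    v = math.hypot(dx, dy) / (t_cur - t_prev)
    theta = math.atan2(dy, dx)
    return v, theta

def predict_position(cur, v, theta, dt):
    """Dead-reckoned location after time dt, assuming constant v and theta."""
    return (cur[0] + v * dt * math.cos(theta),
            cur[1] + v * dt * math.sin(theta))
```

This constant-velocity assumption is what lets the monitor wake only the sensors near the predicted location.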

Interaction between the tracker (T), target (o), monitors, and backups

When a new monitor fails to detect o or is not close to o, the backup takes up the role of the monitor.

The relationship between the monitor, the new monitor, and the backup is maintained through a low-cost implicit linked list among them, as shown in the figure.

When o moves across the sensing field, the monitor can construct a linked list automatically: there is a linear link between the monitor v4 and the new monitor v1, another link between v4 and v5, and also a link between v1 and its backup v5.

Local area prediction-based mobile target tracking in wireless sensor networks (T-Tracking) [13]

T-Tracking: a modification of TTMB that tracks the target using face prediction instead of target-location prediction within faces.

This achieves two major objectives: high quality of tracking (QoT) and high energy efficiency of the WSN.

Basic terms such as target and tracker remain unchanged here, and the assumptions about the RNG, queries, etc. are similar.

This protocol consists of seven algorithms, which are used for face prediction for localization of the target. The outputs of one algorithm serve as inputs to the next.

Here also, all nodes are aware of their geographic location, and an RNG is constructed at the initial step.

Rules for node organization into faces

When T wishes to track t, it issues a message to the WSN requesting assistance in tracking t.

Suppose that vi is any node of the WSN that can detect t as t appears in its vicinity, e.g., v1. Based on a detection probability (Pd), v1 becomes the monitor.

Then v1 starts finding the face Fi in which t appears, and further updates the information about faces. It corresponds to three adjacent faces, namely F1, F2, and F18.

F1 and F18 are called neighboring faces. The nodes in the three adjacent faces in G are (v1, v3, v4, v5), (v1, v5, v6, v7, v2), and (v1, v2, v3).

v1 has only three direct neighboring nodes, v2, v3, and v5, but here we only consider the neighboring nodes with respect to t's location in F2. Thus, the nodes v5, v6, v7, and v2 in F2 are called v1's face neighbors.

v5 and v2 in F2 are called v1's immediate neighbors. The rest of v1's neighboring nodes, v6 and v7, are called distant neighbors.

One immediate neighbor (e.g., v2) becomes the backup, as the combined detection probability between the monitor and that immediate neighbor is the best.

Target detection inside a face (Algo. 1)

Each node estimates a detection probability Pd, which is the probability that a node reports the presence of t when t is within its sensing range Rs. The signal strength Si is modeled using a radio propagation model.

Algorithm 1. Target Detection
Input: A WSN of N sensor nodes observing the target t
Output: t's location li at time h

for each node vi of the WSN in the s1 state do
    Listen to the environment and start sensing;
    Measure Si;
    if t is found then
        Change the status to the s0 state;
        Compute Pd;
        Run t's moving-face detection algorithm; // i.e., Algorithm 2
        Compute the current location li; // after Algorithm 2 runs
end for

Target's moving face detection (Algo. 2)

Each node that has already detected t makes a decision about t: in which specific face t is currently moving; it then localizes t inside the face for the first-time face detection.

Algorithm 2:

Step 1. Node vi, e.g., v1, detects t using Pd at time instant h somewhere in the WSN. Similarly, some neighboring nodes of v1, e.g., v5, v3, v4, v6, and so on, might be able to detect t at h and have Pd to some extent.

Step 2. v1 first interacts with its adjacent neighbors by issuing a request containing the information that t is in range. The information includes Pd and d(,). There are three adjacent neighbors of v1: v5, v2, and v3.

Step 3. After receiving the request messages from all of its neighbors (including the adjacent ones), node vi compares its Pd with that of each node vj paired up with it, e.g., v1<->v2, v1<->v3, v1<->v5.

Step 4. Among all the neighbors, v5 or v2 has the second-best detection probability; these are the immediate neighbors. v3 may have a lower detection probability than v5 or v2. Thus, t should be inside F2, rather than F18 or F11. To find the pair of nodes with the best detection probability, we consider a combined detection probability for each pair.

Target's moving face detection (Algo. 2, contd.)

Finally, three conditions are set for any pair of nodes, such as v1 and v5, to become the monitor and backup: (i) their combined detection probability should be higher than that of other pairs of nodes; (ii) they should be adjacent and also immediate neighbors; (iii) they should be in the same face Fi, e.g., F2.

Step 5. As the monitor and backup, v1 and v5 update the information of F2 and the neighboring faces by following the rules described earlier. Then the complete face Fi is detected, and the nodes of the faces are organized to track t.

Note that the above steps are used for first-time face Fi detection. t's tracking becomes easier in the WSN afterward, as two nodes of F2 (e.g., v1 and v5) compute t's movements and face prediction when t moves from Fi to Fj.


Computing t's moving sequence inside a face (Algo. 3)

t may move in complex and stochastic ways in any direction from face Fi to a future face Fj, as shown in Fig. (a).

t's velocity is unpredictable, and it may be impossible to express the velocity explicitly; t moves inside Fi and then toward Fj, as shown in Fig. (b).

Face prediction (Algo. 4)

Here, based on the movement sequence, the monitor and backup compute a probability of direction, denoted by p, where p is given by:

Tracking process through face prediction (Algos. 5 & 6)

Algorithm 6 provides the pseudocode of the interactions between the monitor and T for t's tracking.

Robustness to special events in tracking (Algo. 7)

References

1. J. Hightower and G. Borriello, A Survey and Taxonomy of Location Systems for Ubiquitous Computing, Technical Report UW-CSE 01-08-03, University of Washington, Computer Science and Engineering, Seattle, WA, August 2001.

2. A. Boukerche et al., Localization systems for wireless sensor networks, IEEE Wireless Communications, 14(6): 6-12, 2007.

3. G. Mao, B. Fidan, and B. D. Anderson, Wireless sensor network localization techniques, Computer Networks, 51(10): 2529-2553, 2007.

4. A. Savvides, C. C. Han, and M. Srivastava, Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors, in Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, pages 166-179, ACM Press, Rome, Italy, July 2001.

5. A. Harter and A. Hopper, A Distributed Location System for the Active Office, IEEE Network, 8(1): 62-70, January 1994.

6. A. Ward, A. Jones, and A. Hopper, A New Location Technique for the Active Office, IEEE Personal Communications, 4(5): 42-47, 1997.

7. P. Bahl and V. N. Padmanabhan, RADAR: An In-Building RF-Based User Location and Tracking System, in Proceedings of IEEE INFOCOM, pages 775-784, Tel Aviv, Israel, April 2000.

8. N. B. Priyantha, A. Chakraborty, and H. Balakrishnan, The Cricket Location-Support System, in Proceedings of the 6th International Conference on Mobile Computing and Networking (ACM MobiCom), Boston, MA, 2000.

9. D. Niculescu and B. Nath, Ad Hoc Positioning System (APS), in Proceedings of IEEE GlobeCom, San Antonio, TX, November 2001.

10. N. Bulusu, J. Heidemann, and D. Estrin, GPS-Less Low Cost Outdoor Localization for Very Small Devices, IEEE Personal Communications Magazine, 7(5): 28-34, 2000.

11. G. Toussaint, The relative neighbourhood graph of a finite planar set, Pattern Recognition, vol. 12, no. 4, pp. 261-268, 1980.

12. M. Z. A. Bhuiyan, G. Wang, and J. Wu, Target tracking with monitor and backup sensors in wireless sensor networks, in Proc. IEEE 18th Int. Conf. Comput. Commun. Netw., pp. 1-6, 2009.

13. M. Z. A. Bhuiyan, G. Wang, and A. V. Vasilakos, Local Area Prediction-Based Mobile Target Tracking in Wireless Sensor Networks, IEEE Transactions on Computers, vol. 64, no. 7, pp. 1968-1982, July 2015.

ANY QUERIES?

THANK YOU!
