
Remote Sensing

An Introduction
By
Dean Vestal
Part 1
What is Remote Sensing
• "Remote sensing is the science and art of obtaining information
about an objects, area or phenomena through the analysis of data
acquired by a device that is not in contact with the object, area or
phenomenon under investigation" (Lillesand and Kiefer, 2000).
• The recording devices used in remote sensing include cameras, digital cameras, spectral scanners, radiometers, lasers, radio frequency receivers, seismographs, gravimeters, magnetometers, and scintillation counters.
• These instruments are designed to detect and record reflected solar radiation, emitted terrestrial radiation, or other forms of energy (e.g. radar or lidar). The form and amount of this energy depend on the physical, chemical or biological state of the object of interest.
• Examples of remote sensing data include aerial photography,
satellite imagery, radar or lidar data.
Electromagnetic Radiation
• Electromagnetic waves are energy transported through space in the form of periodic disturbances of electric and magnetic fields. All electromagnetic waves travel through space at the same speed, c = 2.99792458 × 10^8 m/s, the speed of light. An electromagnetic wave is characterized by a frequency and a wavelength.
• The frequency and wavelength of an electromagnetic wave depend on its source. There is a wide range of frequencies encountered in our physical world, ranging from the low frequencies of electric waves generated by power transmission lines to the very high frequencies of gamma rays originating from atomic nuclei. This wide frequency range of electromagnetic waves constitutes the electromagnetic spectrum.
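For reference, frequency and wavelength are linked through the constant speed of light:

  c = \lambda\,\nu, \qquad \nu = \frac{c}{\lambda}

For example, green light at \lambda = 550 nm has \nu = \frac{2.998\times10^{8}\ \text{m/s}}{550\times10^{-9}\ \text{m}} \approx 5.5\times10^{14}\ \text{Hz}.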
Electromagnetic Spectrum

The electromagnetic spectrum can be divided into several wavelength (frequency) regions, of which only a narrow band from about 400 to 700 nm is visible to the human eye. The spectrum is continuous, with no sharp boundaries between wavelengths.
RS Purpose
• The output of a remote sensing system is some type of interpretable display product, be it an image, a map, or a numerical data set, that mirrors the reality of the surface, near-surface or atmosphere present in the field of view.
• A further step of image analysis and interpretation is
required in order to extract useful information from the
image.
• Information interpreted from images is widely used for GIS analysis
– Many GIS platforms now contain algorithms for image analysis and classification.
• Remote sensing images also serve as GIS base maps
which can be easily acquired and updated
Three different remote sensing platforms:
1. Aircraft based
2. Space Shuttle based
3. Satellite based (sun-synchronous orbit, altitudes of 600-900 km)
Passive Remote Sensing

(Diagram: satellite data flow: downlink, pre-process and archive, distribute for analysis.)
• Passive remote sensing: Optical
– Visible and reflected infrared: reflects the sun's energy.
– Thermal infrared wavelengths: solar energy absorbed, then re-emitted.
Feature: reflected imagery can only be acquired during the daytime (and with no clouds); thermal imagery can be obtained day or night.
Multi-Spectral Sensor
• Landsat MSS/TM/ETM+ (NASA, USA)
• SPOT-1, -2, -3 (France)
• JERS-1 (optical sensor) (Japan)
• MODIS (NASA, USA)
• ASTER (NASA, USA, and Japan)
• IRS-1A, -1B, -1C, -1D (India)
• IKONOS (Space Imaging, USA)
Hyper-Spectral Sensor
• AVIRIS (NASA, USA)
• HyMap (Australia)
Active Remote Sensing
• Active remote sensing: Radar (RAdio Detection And Ranging)
Feature: can acquire images at any time, regardless of the time of day or weather, and can penetrate some surface materials by several meters.
Radar Imagery
Interferometry for Elevation Data
How Interferometry Works
• Each pixel of a radar image contains information on the phase of the signal backscattered from the target surface. By utilizing the geometry provided by two marginally displaced, coherent observations of the surface, the phase difference between the two observations can be related directly to the altitude of the antenna above the ground on a pixel-by-pixel basis. (The resulting phase-difference image is known as an interferogram.)
– The radar antennas (A1 and A2) are at some height h above a reference surface.
– The baseline distance between A1 and A2 is B.
– The distance between A1 and the point on the ground being imaged is the range of the first observation; the distance between A2 and that point is the range of the second.
– The orientation angle of the baseline between A1 and A2 and the radar look angle complete the geometry.
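As a minimal sketch of the governing relations (notation assumed here, since the slide leaves the symbols unnamed: baseline B with orientation angle \alpha, look angle \theta, ranges r_1 and r_2, wavelength \lambda, antenna height h):

  \Delta\phi = \frac{4\pi}{\lambda}\,(r_2 - r_1) \approx \frac{4\pi}{\lambda}\,B\sin(\theta - \alpha)

  z \approx h - r_1 \cos\theta

The first relation converts the measured phase difference into the range difference between the two observations; the second converts range and look angle into terrain height. (The factor 4\pi applies to repeat-pass systems; single-pass systems, in which only one antenna transmits, use 2\pi.)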
• Active remote sensing: Lidar (LIght Detection And Ranging), also called laser radar.
Applications: DEMs, measuring ozone, detecting clouds and aerosols, monitoring air pollution.
Electromagnetic Spectrum
Radar Sensor
• SIR-A, -B, -C (NASA, USA)
• RADARSAT (Canada)
• JERS-1 (radar sensor) (Japan)
• ERS-1 (ESA, Europe)
• AIRSAR/TOPSAR (NASA, USA)

Lidar Sensor
• ALTMS (TerraPoint, USA)
• FLI-MAP (John Chance, USA)
• ALTM (USA)
• TopoEye (USA)
• ATLAS (USA)
Part 2
Analog and Digital Images
• The images collected by remote
sensing may be analog or digital. Aerial
photographs are examples of analog
images while satellite images acquired
using electronic sensors are examples
of digital images.
• A digital image is a two-dimensional
array of pixels saved in a raster format.
Each pixel has an intensity value
(represented by a digital number) and a
location address (referenced by its row
and column numbers).
• The intensity value represents the
measured physical quantity such as the
solar radiance reflected from the
ground. This value is normally the
average value for the whole area
covered by the pixel.
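To illustrate this pixel/row/column structure, a minimal sketch in Python with NumPy (the array values are random placeholders):

  import numpy as np

  # A digital image band as a 2-D array: rows x columns of digital numbers (DNs).
  # 8-bit quantization gives intensity values in the range 0..255.
  band = np.random.randint(0, 256, size=(300, 400), dtype=np.uint8)

  row, col = 120, 85      # location address of one pixel
  dn = band[row, col]     # intensity value (digital number) at that pixel
  print(f"Pixel ({row}, {col}) has DN {dn}")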
Multilayer Image
• The SPOT HRV sensor operating in the multispectral mode detects radiation in three wavelength bands: green (500-590 nm), red (610-680 nm) and near infrared (790-890 nm). A single SPOT multispectral scene consists of three raster images representing the three wavelength bands. Each pixel of the scene has three intensity values corresponding to the three bands.
• By "stacking" these images from the same area together, a multilayer image is formed.
• Multilayer images can also be formed by combining images obtained from different sensors, together with other subsidiary data. For example, a multilayer image may consist of three layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar imagery, and perhaps a layer consisting of the digital elevation model of the area being studied.
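A minimal sketch of the stacking idea in the same NumPy terms (all layer arrays here are zero-filled placeholders, not real SPOT, ERS or DEM data):

  import numpy as np

  rows, cols = 100, 100   # a real SPOT scene is about 3000 x 3000 pixels
  green = np.zeros((rows, cols), dtype=np.uint8)
  red   = np.zeros((rows, cols), dtype=np.uint8)
  nir   = np.zeros((rows, cols), dtype=np.uint8)
  sar   = np.zeros((rows, cols), dtype=np.uint8)    # radar layer
  dem   = np.zeros((rows, cols), dtype=np.float32)  # elevation layer

  # Stack along a new "layer" axis: each pixel now holds one value per layer.
  multilayer = np.dstack([green, red, nir, sar, dem])
  print(multilayer.shape)   # (100, 100, 5): rows, columns, layers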
Spatial Resolution
• Spatial resolution refers to the
measure of the smallest object
that can be resolved by the
sensor, or the size of the area
on the ground represented by
each pixel. Spatial resolution is
usually expressed as Ground-
projected Instantaneous Field
of View (GIFOV). A "High
Resolution" image refers to
one with a small resolution
size. Fine details can be seen
in a high resolution image. On
the other hand, a "Low
Resolution" image is one with
a large resolution size.
Spectral Resolution
Spectral resolution is determined by the number and width of the spectral intervals (bands) that a given sensor is capable of recording. The larger the number and the narrower the spectral bands, the higher the spectral resolution of the data. Based on spectral resolution, remote sensing data are typically divided, in order of increasing spectral resolution, into panchromatic, multispectral and hyperspectral data.

(Figure: Landsat TM, 7 bands, 60-270 nm wide; AVIRIS, 224 bands, 10 nm wide.)


• Panchromatic: a single, broad-band image similar to a black and white photograph. Typically, a panchromatic sensor records energy from the visible to the near-IR in one very wide spectral band. (Image: Landsat 7 panchromatic band, spectral range 0.52 to 0.90 µm, resolution 15 m.)
• Multispectral: the data contain from two to typically no more than 15 spectral bands, in the range from the visible through the near-IR and mid-IR to the thermal-IR portion of the electromagnetic spectrum. By combining various bands, we can obtain unique representations of the study area for easier qualitative and quantitative interpretation (for example, multispectral classification). Spectral bands recorded by the Landsat 7 ETM+ sensor (in µm):
• Band 1: 0.45 to 0.515
• Band 2: 0.525 to 0.605
• Band 3: 0.63 to 0.69
• Band 4: 0.75 to 0.90
• Band 5: 1.55 to 1.75
• Band 6: 10.40 to 12.5
• Band 7: 2.09 to 2.35
Landsat TM Band Combinations
• Different multispectral band combinations can aid in landscape evaluation and classification.
• Bands 3, 2, 1 (RGB): pseudo true color; good for general viewing.
• Bands 4, 3, 2 (RGB): equivalent to a color infrared photo; good for water boundaries and chlorophyll reflectance.
• Bands 4, 5, 3 (RGB): very good for water delineation, and good for vegetation type and condition.
Multispectral Example NDVI
• NDVI (Normalized Difference Vegetation Index) is a formula that uses two satellite bands. If one band is in the visible red region (for example, Landsat band 3) and one is in the near infrared (for example, Landsat band 4), then NDVI = (NIR - RED) / (NIR + RED).
• NDVI values vary with the absorption of red light by plant chlorophyll and the reflection of infrared radiation by water-filled leaf cells.
• NDVI provides a basic estimate of vegetation health and a means of monitoring changes in vegetation over time. The possible range of values is -1 to 1, but the typical range is from about -0.1 (NIR less than visible red, for a not very green area) to 0.6 (for a very green area).
NDVI Result

(Left: NDVI derived from the Landsat 5 TM sensor. Right: color infrared orthophoto.)


• Hyperspectral: the data are recorded in many (tens to hundreds of) very narrow bands. The typical band width is less than 10 nm. The bands are recorded as a contiguous spectrum through the visible, near-IR, mid-IR and thermal-IR. The fine spectral resolution of hyperspectral data enables discrimination of small differences in ground features of interest. The cost of this high spectral resolution is the need for specialized processing techniques, extensive storage, and considerable computer processing power.
Multispectral vs Hyperspectral
(Left: reflectance spectra of three materials as they would appear to the multispectral Landsat 7 ETM sensor. Right: reflectance spectra of the same three materials as they would appear to the hyperspectral AVIRIS sensor. The gaps in the spectrum are wavelength ranges at which the atmosphere absorbs so much light that no reliable signal is received from the surface.)
Application of Hyperspectral
Image Analysis

• Hyperspectral imagery has been used to detect and map a wide variety of materials having characteristic reflectance spectra. For example, hyperspectral images have been used by geologists for mineral mapping.
• It has also been used to detect soil properties, including moisture, organic content, and salinity.
• Vegetation scientists have successfully used hyperspectral imagery to identify vegetation species, study plant canopy chemistry, and detect vegetation stress.
• Military personnel have used hyperspectral imagery to detect military vehicles under partial vegetation canopy, among many other military target detection objectives.
Radiometric Resolution
• Radiometric Resolution refers to the smallest change in intensity level that
can be detected by the sensing system. In a digital image, the radiometric
resolution is limited by the number of discrete quantization levels used to
digitize the continuous intensity value.
• The following images show the effect of degrading the radiometric resolution by using fewer quantization levels.

8-bit quantization 6-bit quantization 4-bit quantization


(256 levels) (64 levels) (16 levels)

3-bit quantization 2-bit quantization 1-bit quantization


(8 levels) (4 levels) (2 levels)
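A minimal sketch of how such degraded versions can be generated from an 8-bit band (NumPy assumed; the bit-shift approach is one simple way to discard low-order intensity information):

  import numpy as np

  def requantize(band8, bits):
      """Reduce an 8-bit band to 2**bits quantization levels."""
      shift = 8 - bits
      return (band8 >> shift) << shift   # keep only the top `bits` bits of each DN

  band = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
  for bits in (6, 4, 3, 2, 1):
      levels = np.unique(requantize(band, bits)).size
      print(2 ** bits, "possible levels;", levels, "distinct DNs actually used")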
Temporal Resolution
• The temporal resolution of a sensor determines how often a given place on Earth is revisited. The temporal resolution of a spaceborne sensor is most often fixed and depends on the orbital parameters of the space vehicle. Some modern sensors can also be pointed (both sideways and fore-and-aft), considerably improving revisit time. Most Earth observation sensors are sun-synchronous, which means they revisit the same location on Earth at the same local time. The temporal resolution of airborne devices is much easier to control and mainly depends on weather conditions. The revisit time of remote sensors is extremely important for time-critical environmental monitoring, disaster management and many agriculture-related applications.
• Examples of different temporal resolutions:
SPOT: 26 days (1-5 days with pointing)
Landsat: 16 days
MODIS: 2 days
GOES: 30 minutes
Swath Width

(Chart: X and Y axes give swath width in km/miles; listed resolutions are spatial (meters) and temporal/return time R (days, nadir/off-nadir). Abbreviations: VNIR = visible and near-IR; MIR = mid-IR; P = panchromatic; TH = thermal.)
• Swath width is most often inversely related to the spatial resolution of the data.
Data Volume and Resolution
• The volume of the digital data can potentially be large for multispectral and hyperspectral data, as a given area is covered in many different wavelength bands.
– For example, a 3-band multispectral SPOT image covers an area of about 60 km x 60 km on the ground with a pixel separation of 20 m, so there are about 3000 x 3000 pixels per image. Each pixel intensity in each band is coded using an 8-bit (i.e. 1-byte) digital number, giving a total of about 27 million bytes per image.
• In comparison, panchromatic data has only one band. Thus, panchromatic systems are normally designed to give a higher spatial resolution than the multispectral system.
– For example, a SPOT panchromatic scene has the same coverage of about 60 km x 60 km, but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36 million bytes per image. If a multispectral SPOT scene were also digitized at a 10 m pixel size, the data volume would be 108 million bytes.
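As a quick check of the arithmetic above (plain Python; the numbers come straight from the text):

  # SPOT scene parameters from the text: 60 km on a side, 8-bit (1-byte) pixels.
  scene_m = 60_000   # scene width/height in metres

  multispectral = (scene_m // 20) ** 2 * 3   # 20 m pixels, 3 bands -> 27,000,000 bytes
  panchromatic  = (scene_m // 10) ** 2 * 1   # 10 m pixels, 1 band  -> 36,000,000 bytes
  ms_at_10m     = (scene_m // 10) ** 2 * 3   # 10 m pixels, 3 bands -> 108,000,000 bytes

  print(multispectral, panchromatic, ms_at_10m)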
Part 3
Image Processing & Analysis
• Many image processing and
analysis techniques have been
developed to aid the
interpretation of remote
sensing images and to extract
as much information as
possible from the images. The
choice of specific techniques
or algorithms to use depends
on the goals of each individual
project. In this section, we will
examine some procedures
commonly used in
analysing/interpreting remote
sensing images.
Pre-Processing
• Prior to data analysis, initial processing on the raw data is usually
carried out to correct for any distortion due to the characteristics of
the imaging system and imaging conditions. Depending on the
user's requirement, some standard correction procedures may be
carried out by the ground station operators before the data is
delivered to the end-user.
• These procedures include radiometric correction to correct for
uneven sensor response over the whole image and geometric
correction to correct for geometric distortion due to Earth's rotation
and other imaging conditions (such as oblique viewing).
• The image may also be transformed (reprojected) to conform to a
specific map projection system.
• Furthermore, if the accurate geographical location of an area on the image needs to be known, ground control points (GCPs) are used to register the image to a precise map (geo-referencing).
Image Enhancement
• In order to aid visual interpretation, the appearance of objects in the image can be improved by image enhancement techniques such as grey-level stretching to improve contrast, and spatial filtering to enhance edges.
• A bluish tint can be seen all over the image, producing a hazy appearance. This haze is due to scattering of sunlight by the atmosphere into the field of view of the sensor. The effect also degrades the contrast between different land cover areas.
Histograms
• It is useful to examine the image histograms before performing any image enhancement. The x-axis of the histogram is the range of available digital numbers, i.e. 0 to 255. The y-axis is the number of pixels in the image having a given digital number.
• The minimum digital number for each band is not zero, so each histogram is shifted to the right by a certain amount. This shift is due to the atmospheric scattering component adding to the actual radiation reflected from the ground. The shift is particularly large for the XS1 band compared to the other two bands, due to the higher contribution from Rayleigh scattering at the shorter wavelength.
• The maximum digital number of each band is also not 255. The sensor's gain
factor has been adjusted to anticipate any possibility of encountering a very
bright object. Hence, most of the pixels in the image have digital numbers well
below the maximum value of 255.
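A minimal sketch of this inspection with NumPy (the `band` array is random placeholder data standing in for one image band):

  import numpy as np

  band = np.random.randint(40, 180, size=(512, 512), dtype=np.uint8)  # placeholder

  counts, bin_edges = np.histogram(band, bins=256, range=(0, 256))
  print("min DN:", band.min())          # > 0: offset from atmospheric scattering
  print("max DN:", band.max())          # < 255: headroom left by the sensor gain
  print("most common DN:", counts.argmax())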
Rayleigh Scattering
• Rayleigh scattering refers to the scattering of light off the molecules of the air, and can be extended to scattering from particles up to about a tenth of the wavelength of the light.
• This scattering is more effective at short wavelengths (the blue end of the visible spectrum). Therefore the light scattered down to the earth at a large angle with respect to the direction of the sun's light is predominantly at the blue end of the spectrum. This gives us the blue sky.
Linear Grey-Level Stretching
• The image can be enhanced by a simple linear grey-level stretch. In this method, a lower threshold value is chosen so that all pixel values below this threshold are mapped to zero, and an upper threshold value is chosen so that all pixel values above this threshold are mapped to 255. All other pixel values are linearly interpolated to lie between 0 and 255. The lower and upper thresholds are usually chosen to be values close to the minimum and maximum pixel values of the image. The grey-level transformation table is shown in the following graph.
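A minimal sketch of the stretch with NumPy (the threshold values and input data are illustrative):

  import numpy as np

  def linear_stretch(band, lower, upper):
      """Map DNs <= lower to 0, DNs >= upper to 255, interpolate in between."""
      band = band.astype(np.float64)
      stretched = (band - lower) / (upper - lower) * 255.0
      return np.clip(stretched, 0, 255).astype(np.uint8)

  # Thresholds chosen near the band's minimum and maximum DNs, as described above:
  band = np.random.randint(40, 180, size=(512, 512), dtype=np.uint8)
  enhanced = linear_stretch(band, lower=40, upper=179)
  print(enhanced.min(), enhanced.max())   # 0 255: full dynamic range used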
Result of LGL Enhancement
The result of applying the linear stretch is shown in the following images (before and after). Note that the hazy appearance has generally been removed, except for some parts near the top of the image. The contrast between different features has also been improved.
Image Classification
• Different land cover types in an image can be discriminated using image classification algorithms on spectral features, i.e. the brightness and "color" information contained in each pixel. The two types of classification procedures are "supervised" and "unsupervised" classification.
• Each class of land cover is referred to as a "theme", and the product of classification is known as a "thematic map".
• The accuracy of a thematic map derived from remote sensing images should be verified by field observation and/or other ancillary data such as aerial photographs.
Unsupervised Classification
• In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters depending on their spectral features. Each cluster is then assigned a land cover type by the analyst.
• The following image shows an example of a thematic map, derived from a multispectral SPOT image using an unsupervised classification algorithm.

Class No. (Color in Map)   Land Cover Type
1 (black)                  Clear water
2 (green)                  Dense forest with closed canopy
3 (yellow)                 Shrubs, less dense forest
4 (orange)                 Grass
5 (cyan)                   Bare soil, built-up areas
6 (blue)                   Turbid water, bare soil, built-up areas
7 (red)                    Bare soil, built-up areas
8 (white)                  Bare soil, built-up areas
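One common clustering algorithm used for this purpose is k-means; a minimal sketch with scikit-learn (the slide does not name a specific clustering method, and the band data here is a random placeholder):

  import numpy as np
  from sklearn.cluster import KMeans

  # Three co-registered bands of a multispectral scene (placeholder data).
  rows, cols = 200, 200
  bands = np.random.rand(rows, cols, 3)

  # Each pixel becomes one sample with 3 spectral features.
  pixels = bands.reshape(-1, 3)

  # Group the pixels into 8 spectral clusters, as in the thematic map above.
  labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)
  thematic_map = labels.reshape(rows, cols)
  # The analyst then assigns a land cover type to each cluster.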
Supervised Classification
• In supervised classification, the spectral features of areas of known land
cover types are selected from the image. These areas are known as the
"training areas". Every pixel in the whole image is then classified as
belonging to one of the classes depending on how close its spectral features
are to the spectral features of the training areas.

Class No.   Land Cover Type
1           Deciduous trees
2           Exposed dark soil
3           Exposed bright soil
4           Grass covered soil
5           Tilled soil with vegetation
6           Road
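A minimal sketch of one simple supervised scheme, minimum distance to class means, with NumPy (the slide does not name a specific classifier; maximum likelihood is another common choice, and all values here are placeholders):

  import numpy as np

  def minimum_distance_classify(pixels, class_means):
      """Assign each pixel to the class whose mean spectral vector is closest."""
      # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
      dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
      return np.argmin(dists, axis=1)

  # Mean spectral features computed from the training areas (placeholder values).
  class_means = np.array([[0.1, 0.3, 0.6],    # e.g. class 1: deciduous trees
                          [0.4, 0.4, 0.3],    # e.g. class 2: exposed dark soil
                          [0.8, 0.7, 0.5]])   # e.g. class 3: exposed bright soil

  pixels = np.random.rand(10_000, 3)          # scene pixels, 3 bands
  labels = minimum_distance_classify(pixels, class_means)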
Geomatics
• Geomatics, as a term, evolved mostly by renaming what was previously called "geodesy" or "surveying", and by combining a number of computer science and/or GIS-oriented technologies. Geomatics is the science of acquisition, management, modeling, analysis and representation of spatial data and processes, with specific consideration of problems related to spatial planning, land use/land development and environmental issues.
• Geomatics bridges a wide arc from the geosciences through various engineering sciences and computer science to spatial planning, land development and the environmental sciences.
Aerial Photography
• Easy to acquire
• Relatively inexpensive
• High resolution
Stereoscopy

• Stereoscopy is the science and art that deals with the use of binocular vision for the observation of overlapping photographs or other perspective views, and the methods by which such views are produced. Essentially, most of us with normal eyesight have stereoscopic vision, i.e. the ability to see and appreciate depth of field through the perception of parallax.
Photogrammetry
• The science of making reliable
measurements by the use of photographs
and especially aerial photographs.
Orthorectification
• Orthorectification is a process to correct the data planimetrically
such that the final result (orthoimage) can be used as an image-
based map. The orthoimage can be used in many geomatic fields
such as the integration with GIS.
• Three types of distortion:
– Radial Lens Distortion
– Terrain Distortion
– Camera Displacement
Planimetric View
Radial distortion: produced by the curvature of the camera lens and imperfections in its shape; gives a 'fish-eye' effect.
Terrain distortion: induced by varying elevation.
Displacement: occurs when the camera is not pointing straight down.
• Ground control points (GCPs) are collected to register the image.
– Obtained from other orthorectified images (DOQs), GPS points mapped in the field, topographic maps, etc.
• Mathematical models are then applied to rectify the image.
Modeling Methods
• The polynomial method is a very simple but outdated method for
correcting images. It does not correct distortions introduced during
the image acquisition and does not take into account terrain relief
distortions. The polynomial method also requires many ground
control points (GCPs). It is not recommended for most applications
in remote sensing and GIS.
• A parametric model is a mathematical representation of the physical
law of the transformation between the image and ground spaces. It
corrects the entire image globally and also takes into consideration
the distortions due to terrain. It is the recommended method for
achieving the best results.
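To make the polynomial method concrete, a minimal sketch of fitting a first-order (affine) polynomial from image coordinates to map coordinates by least squares with NumPy (the GCP values are made up for illustration):

  import numpy as np

  # GCPs: (column, row) in the image vs (easting, northing) on the map.
  img = np.array([[10, 20], [400, 30], [50, 380], [420, 400]], dtype=float)
  map_xy = np.array([[500010, 4200480], [500790, 4200470],
                     [500090, 4199760], [500830, 4199720]], dtype=float)

  # First-order polynomial: x' = a0 + a1*col + a2*row (and likewise for y').
  A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])
  coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)

  # Transform any image pixel to map coordinates:
  pixel = np.array([1.0, 200.0, 200.0])   # [1, column, row]
  print(pixel @ coeffs)                   # approximate (easting, northing)

Note that, as the slide cautions, such a global polynomial ignores terrain relief; a parametric sensor model handles those distortions.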
RS/GIS Software in ER&P Lab
• ERDAS Imagine
• PCI Geomatica
• Idrisi 32
• SPANS
• ArcView 3.2
• ArcGIS 8.1
References
• Lillesand, T. M. and R. W. Kiefer. 2000. Remote Sensing and Image Interpretation, 4th edition. John Wiley & Sons, Inc., New York.
• ERDAS Field Guide, 4th edition. 2000. ERDAS, Inc., Atlanta, Georgia.
• Remote Sensing Data and Information: http://rsd.gsfc.nasa.gov/rsd/RemoteSensing.html
• Jet Propulsion Laboratory: http://www.jpl.nasa.gov/
• The Landsat Program: http://geo.arc.nasa.gov/sge/landsat/landsat.html
• Digital Globe (QuickBird): https://www.digitalglobe.com/
• Spot Image Corporation: http://www.spot.com/
• Space Imaging (IKONOS): http://www.spaceimaging.com/
• The Virtual Science Center: http://www.sci-ctr.edu.sg/ssc/publication/remotesense/rms1.htm
• The Use of Satellite Remote Sensing: http://www.ciesin.org/TG/RS/RS-home.html
• CRSSA Syllabus: http://www.crssa.rutgers.edu/courses/remsens/
• Remote Sensing and Image Interpretation & Analysis: http://mercator.upc.es/tutorial/table.html
• The Remote Sensing Core Curriculum: http://research.umbc.edu/~tbenja1/index.html
• Environmental Studies of the World Trade Center Area after the September 11, 2001 Attack: http://greenwood.cr.usgs.gov/pub/open-file-reports/ofr-01-0429/
