
Digital Image Processing

- Image
- Digital Image
Image:
- An image is a two-dimensional picture that has a similar appearance to some subject:
  - A physical object or a person
- Two-dimensional images:
  - A photograph
  - A screen display
  - A drawing/painting
- An image can also be three-dimensional, such as:
  - A hologram
  - A stereoscopic 3-D image
  - A 3-D movie

Capturing Images:
- Optical devices:
  - Cameras
  - Lenses and mirrors
  - Telescopes and microscopes, etc.
- Natural objects and phenomena:
  - The human eye
  - Water surfaces, rain droplets

- The word "image" is also used in the broader sense of two-dimensional figures such as:
  - Maps
  - Graphs
  - Pie charts
  - Abstract paintings

Images can be still images or a movie
(a collection of still images taken at a fixed time interval,
at more than 10 frames per second).

Digital Images
- Generally, images are analog (taken by roll-film cameras, or paintings)
- For computer processing, these images have to be digitized
- Digitized images:
  - Images converted to digital form by digitizers/scanners
  - Images acquired by digital cameras, which produce digital images directly
- Digital images are represented by pixels (picture elements)
- A digital image is defined as a two-dimensional function f(x, y),
  where x and y are spatial (plane) coordinates and
  f is the amplitude of the light intensity, or grey level,
  at any pair of coordinates (x, y)
  (the average grey level, or light intensity,
  of the pixel area surrounding the point (x, y))

- Digitizing an image involves:
  - Sampling (discretization):
    taking samples of the grey level along the x and y coordinates
  - Quantization:
    mapping the sampled grey levels onto a grey scale
    (a grey scale is a set of predefined grey-level values, e.g. 0 - 255)
- A digital image can also be computed from:
  - A geometric model or
  - A mathematical formula
- This process of image generation is
  commonly known as:
  - Image synthesis or
  - Image rendering

Human Vision
- The most advanced of our senses
- Plays the most important role in human perception
- Limited to only the visible band of the
  (electromagnetic) spectrum

Electromagnetic spectrum

[Figure: The electromagnetic spectrum, laid out by frequency (Hz) and wavelength (meters)]

Visible spectrum: 0.4 × 10⁻⁶ to 0.7 × 10⁻⁶ meters, or
400 nanometers to 700 nanometers

[Figure: Optical disc formats and their lasers: CD (infrared laser), DVD (red laser), Blu-ray disc (blue laser)]

LASER: Light Amplification by Stimulated Emission of Radiation

Electromagnetic waves

[Figure: One wavelength of an electromagnetic wave]

ν = c / λ
where:
  c = speed of light (2.998 × 10⁸ m/s)
  λ = wavelength in meters
  ν = frequency in Hz

Planck-Einstein equation:
E = h × ν
where:
  E is the energy in electron volts
  h is Planck's constant (4.136 × 10⁻¹⁵ eV·s)

- Energy is proportional to frequency
- Wavelength is inversely proportional to frequency
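To make these relations concrete, here is a minimal Python sketch (the constants are standard physical values, not taken from the slides) that evaluates E = h × ν = h × c / λ at both ends of the visible band:

```python
C = 2.998e8       # speed of light, m/s
H = 4.136e-15     # Planck's constant, eV*s

def photon_energy_ev(wavelength_m):
    """Photon energy E = h * nu, with nu = c / wavelength."""
    return H * C / wavelength_m

print(photon_energy_ev(700e-9))   # red edge of the visible band:  ~1.77 eV
print(photon_energy_ev(400e-9))   # blue edge of the visible band: ~3.10 eV
```

A shorter wavelength means a higher frequency and therefore a higher photon energy, exactly as the two bullets above state.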

- Images need not be limited to the visible spectrum;
  images can be related to the entire electromagnetic spectrum:
  - Gamma rays
  - X-rays
  - Ultraviolet rays
  - Visible range (the human eye: 0.4 × 10⁻⁶ to 0.7 × 10⁻⁶ meters)
  - Infrared
  - Microwaves
  - Radio waves
- And beyond the electromagnetic spectrum: sound waves (ultrasound)

Applications of Image Processing

- Gamma-ray imaging
  - Radioisotopes are injected into a patient;
    a 3-D image is generated by gamma-ray detectors
- X-ray imaging
  - X-rays are generated in an X-ray vacuum tube
    (by electrons emitted by the cathode striking the anode)
  - The transmission of the X-rays passing through the
    patient's body depends upon the density of the bones
  - The transmitted X-rays create an image of the internal
    organs on photographic film
  - The X-ray image can also be converted directly to a
    digital image by using a phosphor screen

- Ultraviolet rays
  - These rays falling on fluorescent material generate red light
  - Ultraviolet rays can be used for studying materials
    (fluorescence)
- Visible range
  - Visible blue: 0.45 to 0.52 µm - maximum water penetration
  - Visible green: 0.52 to 0.60 µm - measuring the vigor of plants
  - Visible red: 0.63 to 0.69 µm - vegetation discrimination
- Infrared
  - Near infrared: 0.76 to 0.90 µm - biomass, shoreline mapping
  - Middle infrared: 1.55 to 1.75 µm - moisture content of soil/vegetation
  - Thermal infrared: 10.4 to 12.5 µm - soil moisture, thermal mapping, night vision

- Microwaves
  - Radar
    - Works in ambient light and in any weather
    - Penetrates clouds
    - Sees through ice/dry sand
- Radio waves
  - Radio astronomy
  - MRI (Magnetic Resonance Imaging)
- Sound waves
  - Acoustics (hundreds of hertz)
    - Geophysical/oil exploration industry
  - Ultrasonic (millions of hertz), pulse-echo
    - Seismic images
    - Medical applications
- Electron beams
  - Used in electron microscopy (about 10,000× magnification)
- Computer-generated images
  - Synthetic images
    - Used for 3-D modeling and visualization, virtual reality
  - Fractal images
    - A fractal is an iterative reproduction of a basic pattern
      according to some mathematical rule

[Figure: Example images - X-ray images: an angiogram, a circuit board, a CT (computed tomography) head scan; visible-band images: a thumb print, paper currency, automated license-plate reading; an ultrasonic image of a baby; fractal images, e.g. a rose field]

Objectives of Digital Image Processing:

1. Improving the pictorial information for
   human interpretation
2. Processing of image data for:
   a. Storage (compression)
   b. Transmission (compression)
   c. Representation
   d. Automatic machine perception

Types of operations on images:

1. Image processing:
   Operations on an image to enhance a particular feature
   Image in, image out

2. Image analysis:
   Understanding of image characteristics
   Image in, measurement out

3. Computer vision:
   Using computers to emulate human vision
   - Recognizing objects in images
   - Being able to draw inferences and take actions
   Image in, high-level description out

Different levels of image processing:

1. Low level
   Primitive operations:
   - Noise reduction
   - Contrast enhancement
   - Sharpening
   In this processing, both input and output are images
2. Mid level
   Segmentation (partitioning of an image), integration etc.,
   with the objective of:
   a. Description of objects in the image (computer processing)
   b. Classification of objects (recognition)
3. High level
   Making sense of an image:
   - Recognition of objects
   - Performing cognitive functions (vision)

Fundamental steps in Digital Image Processing

1. Image Acquisition
   Scaling/clipping/rotating etc.
2. Image Enhancement
   To bring out details that are obscured
   To highlight certain features of interest
   Image enhancement is subjective
   - It depends on human preferences
     regarding what constitutes a good image
3. Image Restoration
   - To improve the appearance of an image
   - Image restoration is objective,
     in the sense that image restoration techniques
     are based on mathematical or probabilistic models
     of image degradation
4. Colour Image Processing
   Gained importance with the use of images on the Internet

5. Wavelet and Multi-resolution Processing
   The foundation for representing images at various
   degrees of resolution
6. Compression
   Reducing storage and bandwidth requirements
7. Morphological Processing
   Extracting image components that are useful in the
   description/representation of images
8. Segmentation
   Partitioning an image into its constituents/objects
9. Image Representation
   Description of an image:
   - Boundary/region
10. Image Recognition
    Assignment of labels to the various constituents of an image

Image sensing and acquisition

- Light energy falling on sensors
  gets converted into voltage signals
- A digitizer (ADC) converts the voltage signals to discrete grey-level values

Light → Sensors → Analog voltage → Digitizer → Discrete values

Types of sensors:
- Single sensor: one sensor for all pixels
  - A mechanical system moves every part of
    the picture in front of the single sensor
- Sensor strip: a strip of sensors
  - One sensor per pixel of a line of the picture
  - A mechanical system moves the strip of sensors
    along the picture
- Sensor array: an array of sensors, one sensor per pixel of the image
  - An optical system (lens/aperture and a shutter)
    focuses the entire picture on the sensor array

Array of sensors:

[Figure: An array of sensors located at the image plane captures a continuous image of the object; the result is obtained by image sampling and quantization]

Sampling: sensing at fixed intervals in the x and y directions
Quantization: mapping intensity to a set of grey levels

Digitization of an Image

[Figure: An M = 16 row by N = 16 column grid placed over an image; pixel (0, 0) is at the top-left corner, the x axis runs down and the y axis runs across; each pixel, e.g. at x = 3, y = 10, holds a light-intensity (grey-level) value, 0 - 255 for 8 bits]

Digitizing an image:
1. Place a grid on the image
2. Divide the image into picture elements (pixels)
3. Measure the average grey-level value at
   each grid element (picture element, or pixel) and
   quantize the grey-level value
   (by mapping it onto the specified grey-level scale)
The digital image is a function f(x, y),
where x = 0 to M-1 and y = 0 to N-1, and
f(x, y) is the value of the pixel at (x, y), in integers (quantized)
(0 to 255 for 8-bit values)
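The three steps can be sketched in a few lines of Python. This is a minimal illustration, not a complete digitizer: the `scene` argument is a hypothetical stand-in for the continuous image, a function returning intensity in [0, 1] at real-valued coordinates.

```python
import numpy as np

def digitize(scene, M=16, N=16, bits=8):
    """Sample a continuous scene on an M x N grid, then quantize
    each sample onto a grey scale of 2**bits predefined levels."""
    levels = 2 ** bits
    img = np.zeros((M, N), dtype=np.uint8)
    for x in range(M):                # row index
        for y in range(N):            # column index
            # Sampling: read the scene at the centre of pixel (x, y)
            v = scene((x + 0.5) / M, (y + 0.5) / N)   # intensity in [0, 1]
            # Quantization: map the intensity onto the 0..levels-1 grey scale
            img[x, y] = min(int(v * levels), levels - 1)
    return img

# Example: a smooth left-to-right intensity ramp as the "continuous" scene
print(digitize(lambda x, y: y))
```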

Digital image

[Figure: A digital image, with the x coordinate (0 to M-1) running down the rows and the y coordinate (0 to N-1) running across the columns]

- A pixel (picture element) at any coordinate (x, y)
  represents the average value of the light intensity over
  the square area of the pixel at (x, y)

Representing Digital Images

- The x, y coordinates take discrete values 0, 1, 2, 3, ...
- The image starts from the top-left corner
- Intensity (grey level) is quantized to the grey scale 0 to L-1
- The value of the grey level at any coordinate (x, y) is f(x, y):

f(x, y) =
| f(0,0)     f(0,1)     f(0,2)    ...   f(0,N-1)   |
| f(1,0)     f(1,1)     f(1,2)    ...   f(1,N-1)   |
| f(2,0)     ...        ...       ...   f(2,N-1)   |
| ...                                              |
| f(M-1,0)   ...        ...       ...   f(M-1,N-1) |

where x varies from 0 to M-1 and y varies from 0 to N-1.

This matrix f(x, y) is what is commonly called the digital image.

Common Values (M, N and L)

- There are standard values for the various parameters encountered
  in digital image processing:

  Parameter    | Symbol | Typical values
  -------------|--------|---------------------------------------
  Rows         | M      | 256, 512, 525, 625, 1024
  Columns      | N      | 256, 512, 768, 1024, 1320
  Grey levels  | L      | 2, 8, 64, 256, 1024, 4096, 16384, 64K
  Bits         | B      | 1, 2, 4, 8, 10, 12, 14, 16

Generally:

M = N = 2^B, where B ∈ {2, 4, 8, 10, 12, 16}
L = 2^8 = 256 (8 bits, or 1 byte)

A colour image has three components: red, green and blue

- The intensity of each colour component can be represented on a 256-level
  grey scale (0 to 255)
- Thus each pixel of a colour image requires 3 bytes (24 bits),
  as compared to 1 byte (8 bits) for a 256-level monochrome image
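A short sketch of what this means for storage (the helper name is only illustrative):

```python
def image_bytes(M, N, channels=1, bits_per_channel=8):
    """Uncompressed storage needed for an M x N image."""
    return M * N * channels * bits_per_channel // 8

print(image_bytes(1024, 1024))              # 8-bit monochrome: 1048576 bytes (1 MiB)
print(image_bytes(1024, 1024, channels=3))  # 24-bit RGB: 3145728 bytes, three times as much
```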

[Figure: One colour image is split into three grey-scale images (red, green, blue) by using filters]

Image formation in the EYE

Elements of visual perception:
- The eye is nearly spherical, about 20 mm in diameter
- Three membranes enclose the eye:
  1. Cornea and sclera
  2. Choroid
  3. Retina
1. Cornea and sclera (outer cover)
   - The cornea is a tough, transparent tissue that
     covers the anterior (front) surface of the eye
   - The sclera is an opaque membrane that
     encloses the remainder of the optic globe

2. Choroid
   - Lies directly below the sclera
   - A network of blood vessels
   - Serves as a major source of nutrition to the eye
   - Heavily pigmented,
     which helps reduce the amount of
     extraneous light entering the eye and also
     reduces the backscatter within the optic globe
   - At the anterior (front) extreme, the choroid is divided into:
     1. the ciliary body and
     2. the iris

- Iris
  - The iris contracts or expands to control the amount of light
    that enters the eye
  - Its central opening (the pupil) varies from 2 to 8 mm in diameter
  - The front of the iris contains the visible pigment of the eye
    (black/brown pigment)
- Lens
  - Made of layers of fibrous cells,
    attached to the ciliary body
    (cataracts: clouding of the lens)
  - Focal length of the lens: 14 mm to 17 mm
  - The lens absorbs about 8% of visible light, and also absorbs
    infrared and ultraviolet rays appreciably

3. Retina
   - The innermost membrane of the eye
   - Lines the inside of the wall's posterior portion
   - When the eye is properly focused,
     light from an object outside the eye
     is imaged on the retina
   - A distribution of discrete light receptors over
     the surface of the retina senses the pattern of light
   - There are two classes of receptors:
     1. Cones and
     2. Rods

- Cones:
  - About 6-7 million
  - Located primarily in the central portion of the retina
  - Sensitive to colour
  - The eye rotates until the image is focused on the cone area
  - The eye resolves the fine details of an image with the cones
  - Cone vision is called photopic or bright-light vision
- Rods:
  - 70-150 million, distributed over the back surface of the retina
  - Rods give a general, overall picture of the field of view
  - Sensitive to the overall intensity of light, not colours
  - Sensitive to low levels of illumination
  - Rod vision is called scotopic or dim-light vision

[Figure: Cross-section of the human eye (diameter = 20 mm), labelling: 1. the cornea (outer transparent cover) and sclera (outer opaque cover); 2. the choroid, with the ciliary muscle and ciliary fibers; 3. the retina, with the fovea, the visual axis and the blind spot; the iris diaphragm with its 2 - 8 mm central opening; the lens (focal length 14 - 18 mm); and the vitreous humor]

Image formation in the eye:

- The distance between the lens and the retina is fixed
- Focusing is done by varying the focal length of the lens,
  from 14 to 18 mm:
  - thinning the lens to focus on distant objects
  - thickening the lens to focus on near objects

Dimension of the image on the retina
(e.g., for an object 15 m high viewed from 100 m,
with a lens-to-retina distance of 17 mm, by similar triangles):

h / 17 = 15 / 100
h = 17 × 15 / 100 = 2.55 mm

[Figure: The same image scanned at sizes from 1024 × 1024 down to 32 × 32 pixels, all enlarged to the same display size]

- As the spatial resolution is decreased, the image starts degrading

[Figure: Reduction of grey levels from 256 down to 2 levels at the same resolution of 452 × 374 pixels: 256-, 128-, 64-, 32-, 16-, 8-, 4- and 2-level versions of the image]

- As the number of intensity levels is reduced, the quality of the image degrades

Basic relationships between pixels

1. Neighbours of a pixel p(x, y)
   - In the horizontal and vertical directions these are:
     (x+1, y), (x-1, y), (x, y+1), (x, y-1)

                 (x-1, y)
     (x, y-1)    p(x, y)    (x, y+1)
                 (x+1, y)

   - These 4 neighbours of pixel p are denoted by N4(p)
   - Each pixel is at unit distance from (x, y)
   - If p lies at the border of the image, some of its neighbouring pixels
     may lie outside the digital image
   - The four diagonal neighbours of pixel p(x, y) are:
     (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
   - The diagonal neighbour pixels are denoted by Nd(p)
   - All 8 pixels together, N4(p) + Nd(p), are denoted by N8(p)
   - Here also, some neighbour pixels may fall outside the image
     when p is at the border
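These neighbourhoods translate directly into code. A minimal sketch (function names are illustrative) that also shows the border clipping mentioned above:

```python
def n4(x, y):
    """4-neighbours of (x, y): horizontal and vertical."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbours of (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y, M=None, N=None):
    """All 8 neighbours; if the image size M x N is given,
    drop neighbours that fall outside the image border."""
    nbrs = n4(x, y) + nd(x, y)
    if M is not None and N is not None:
        nbrs = [(i, j) for i, j in nbrs if 0 <= i < M and 0 <= j < N]
    return nbrs

print(n8(0, 0, M=16, N=16))   # corner pixel: only 3 neighbours remain
```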

Adjacency
Let V be the set of intensity values used to define adjacency.

- If V = {1}, then adjacency of pixels with the value 1 is being
  considered
- V can also be a subset of the 256 grey levels (0 to 255)
  (say the range of levels 130 to 200)
1. 4-adjacency: two pixels p and q with values from V
   are 4-adjacent if q is in the set N4(p)
2. 8-adjacency: two pixels p and q with values from V
   are 8-adjacent if q is in the set N8(p)

3. Mixed adjacency (m-adjacency)
   A modification of 8-adjacency.
   Two pixels p and q with values from V (or value 1) are m-adjacent if:
   1. q is in the set N4(p), or
   2. q is in the set Nd(p), and
      the intersection of the sets N4(p) and N4(q)
      has no pixels whose values are from V (or value 1)

- m-adjacency is introduced to eliminate the ambiguity
  arising out of multiple paths in 8-adjacency
- It means a diagonal adjacency is taken only if there is
  no pixel from V (or value 1) in the vertical/horizontal locations
  shared by N4(p) and N4(q)

[Figure: The two pixels at the top are connected to the central pixel by two paths under 8-adjacency, but by a single path under m-adjacency; an intersection pixel with value 1 blocks the diagonal step to q1, while an intersection pixel with value 0 permits the diagonal step to q2]

m-adjacency:
- A mixture of the two adjacencies, N4 and Nd,
  to eliminate the ambiguity of 8-adjacency
- If q is at a diagonal position, then neither of the two pixels
  shared between N4(p) and N4(q) may be 1
- So a diagonal step is never taken where a straight path exists
- Since q1 is diagonal and the intersection of N4(p) and N4(q1)
  contains a 1, the diagonal path to q1 is not possible
- Since q2 is diagonal and the intersection of N4(p) and N4(q2)
  contains only 0s, the diagonal path to q2 is possible
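The rule can be captured in a small predicate. A minimal sketch, assuming p and q are interior pixels given as (row, column) tuples; for a diagonal pair, the intersection N4(p) ∩ N4(q) is exactly the two pixels (p_row, q_col) and (q_row, p_col):

```python
import numpy as np

def m_adjacent(img, p, q, V={1}):
    """True if pixels p and q are m-adjacent for the value set V."""
    if img[p] not in V or img[q] not in V:
        return False
    dr, dc = q[0] - p[0], q[1] - p[1]
    if abs(dr) + abs(dc) == 1:                 # q in N4(p): always allowed
        return True
    if abs(dr) == 1 and abs(dc) == 1:          # q in Nd(p): allowed only if
        shared = [(p[0], q[1]), (q[0], p[1])]  # the shared 4-neighbours...
        return all(img[s] not in V for s in shared)  # ...are not in V
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(m_adjacent(img, (1, 1), (0, 2)))   # False: shared pixel (0, 1) is 1
print(m_adjacent(img, (1, 1), (2, 2)))   # True: shared pixels are 0
```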

Digital path or curve

- A path from pixel p(x, y) to pixel q(s, t)
  is a sequence of distinct pixels with coordinates:
  (x0, y0), (x1, y1), ..., (xn, yn)
  where (x0, y0) = (x, y) (the starting point) and
  (xn, yn) = (s, t) (the end point)
- Pixels (xi, yi) and (xi-1, yi-1) are adjacent for every i, 1 ≤ i ≤ n
- In such a case, n is the length of the path
- If (x0, y0) = (xn, yn), then the path is closed
- A path can be defined as a 4-, 8- or m-path, depending upon
  the type of adjacency

[Figure: A connected path between pixels p and q, lying entirely within a subset S of the image]

Connected pixels
- Let S be a subset of pixels in an image
- Let p and q be two pixels in the subset S
- Then pixels p and q are said to be connected in S
  if there exists a path between p and q
  consisting of pixels entirely from the image subset S

Connected component
- For any pixel p in S,
  the set of pixels in S that are connected to the pixel p
  is called a connected component of S

Connected set
- If S has only one connected component,
  then the set S is called a connected set
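A connected component can be found with a breadth-first flood fill. A minimal sketch (names illustrative); it also lets you test whether S is a connected set, by checking that the component grown from any seed covers all of S:

```python
from collections import deque
import numpy as np

def connected_component(img, seed, V={1}, adjacency=4):
    """Set of pixels connected to `seed` through pixels whose
    values are in V, under 4- or 8-adjacency."""
    if img[seed] not in V:
        return set()
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if adjacency == 8:
        steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    comp, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in steps:
            n = (r + dr, c + dc)
            if (0 <= n[0] < img.shape[0] and 0 <= n[1] < img.shape[1]
                    and n not in comp and img[n] in V):
                comp.add(n)
                frontier.append(n)
    return comp

img = np.array([[1, 1, 0],
                [0, 1, 0],
                [0, 0, 1]])
print(connected_component(img, (0, 0), adjacency=4))  # excludes (2, 2)
print(connected_component(img, (0, 0), adjacency=8))  # includes (2, 2)
```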

Region
- Let R be a subset of pixels in an image;
  R is called a region of the image
  if R is a connected set

Adjacent regions
- Two regions are said to be adjacent
  if their union forms a connected set
  (i.e., they have connected 1s)
- The adjacency can be 4- or 8-adjacency

Disjoint regions
- Two regions that are not adjacent (having no 1s connected)

Example (region Ri on top, region Rj below):

Ri:  1 1 1
     1 0 1
     0 1 0
Rj:  0 0 1
     1 1 1
     1 1 1

- Ri and Rj are adjacent if 8-adjacency is considered
- Ri and Rj are disjoint if 4-adjacency is considered,
  since no 4-path exists between Ri and Rj

Boundary/border or contour of a region R

- The boundary is the set of pixels in the region R
  that have one or more neighbour pixels
  that are not in the region R
- If R happens to be the entire image
  (a rectangular set of pixels),
  then the boundary is defined as the set of pixels in:
  - the first and the last rows and
  - the first and the last columns of the image
- This extra definition is required because
  there are no neighbour pixels beyond
  the border of the image
- Normally a region is a subset of an image, and
  any pixels of the region lying at the edge of the image
  are included in the boundary

[Figure: A block of 1s with its border pixels marked; every pixel with a 0 neighbour belongs to the boundary]

- Edge
  The difference between an edge and a boundary:
  - The boundary of a region forms a closed path;
    it is a global concept
  - Edges are formed from pixels whose derivatives
    of intensity level exceed a certain threshold
  - An edge is a local concept,
    based on intensity-level discontinuity
  - An edge need not form a closed path
  - Edges can be considered as intensity discontinuities

Distance measure

Two kinds of distances are used:
1. Euclidean distance
2. Block-level distances (city-block and chessboard)

Consider pixels p(x, y), q(s, t) and z(v, w).
D is a distance function (or metric) if:
a. D(p, q) ≥ 0 (with D(p, q) = 0 iff p = q)
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)

1. Euclidean distance between p and q:

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

For this distance measure,
the pixels having a distance of at most some value r from (x, y)
are the points contained in a disk of radius r centred at (x, y).

2. D4 distance (or city-block distance):

D4(p, q) = |x − s| + |y − t|

- The pixels having a D4 distance of at most some value r from (x, y)
  form a diamond centred at (x, y)
- For example, the pixels with D4 distance ≤ 2 from the centre point (x, y):

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

- The pixels with D4 = 1 are the four 4-neighbours of (x, y)

3. D8 distance (or chessboard distance):

D8(p, q) = max(|x − s|, |y − t|)

- The pixels with a D8 distance of at most some value r from (x, y)
  form a square centred at (x, y)
- For example, the pixels with D8 distance ≤ 2 from the centre (x, y):

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

- The pixels with D8 = 1 are the N8 (8 neighbours) of (x, y)
- The D4 and D8 distances between points p and q are
  independent of any path that might exist between the points;
  these distances involve only the coordinates of the points

However, if m-adjacency is considered:

- The Dm distance between two points
  is defined as the shortest m-path between the points
- In this case the distance between two pixels
  depends upon the values of the pixels along the path
  and the values of their neighbour pixels

For example, consider the arrangement below, where p0, p2 and p4
have value 1, and p1 and p3 may be 0 or 1:

       p3  p4
   p1  p2
   p0

1. If p1 and p3 are both 0:  Dm(p0, p4) = 2,  path (p0, p2, p4)
2. If only p1 = 0 (p3 = 1):  Dm(p0, p4) = 3,  path (p0, p2, p3, p4)
3. If only p3 = 0 (p1 = 1):  Dm(p0, p4) = 3,  path (p0, p1, p2, p4)
4. If p1 and p3 are both 1:  Dm(p0, p4) = 4,  path (p0, p1, p2, p3, p4)

Determine:
1. the Euclidean distance,
2. the city-block distance, and
3. the chessboard distance between p and q in the following subimage:

[Grid: columns 1 - 8, rows 1 - 10, with p at row 10, column 1 and q at row 2, column 8]

p = (10, 1),  q = (2, 8)

De = [(10 − 2)² + (1 − 8)²]^(1/2) = (8 × 8 + 7 × 7)^(1/2) = (113)^(1/2) ≈ 10.63
D4 = 8 + 7 = 15
D8 = max(8, 7) = 8
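The three measures in code, reproducing the numbers above (a straightforward sketch):

```python
import math

def d_euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):                      # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                      # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (10, 1), (2, 8)
print(d_euclidean(p, q))   # sqrt(113) ~= 10.63
print(d4(p, q))            # 15
print(d8(p, q))            # 8
```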

Array versus matrix operations

- An array operation involving one or more images is
  carried out on a pixel-by-pixel basis.

Array (element-wise) multiplication:

  | a11  a12 |   | b11  b12 |   | a11·b11   a12·b12 |
  | a21  a22 | . | b21  b22 | = | a21·b21   a22·b22 |

Matrix product:

  | a11  a12 |   | b11  b12 |   | a11·b11 + a12·b21   a11·b12 + a12·b22 |
  | a21  a22 | x | b21  b22 | = | a21·b11 + a22·b21   a21·b12 + a22·b22 |

Almost all operations in image processing are pixel-by-pixel
array operations.
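In NumPy the distinction is exactly the `*` versus `@` operator, as this small sketch shows:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

print(a * b)   # array (element-wise) product: [[ 5 12] [21 32]]
print(a @ b)   # matrix product:               [[19 22] [43 50]]
```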

Arithmetic operations on two images

Let f(x, y) and g(x, y) be two M × N images
(x = 0, 1, 2, ..., M−1 and y = 0, 1, 2, ..., N−1).
These operations are carried out between corresponding pixels of the images:

Sum:        s(x, y) = f(x, y) + g(x, y)
Difference: d(x, y) = f(x, y) − g(x, y)
Product:    p(x, y) = f(x, y) × g(x, y)
Quotient:   v(x, y) = f(x, y) ÷ g(x, y)

s, d, p and v are images with M rows and N columns.

Applications:
- Noise removal: averaging k noisy images of the same scene,

  ḡ(x, y) = (1/k) Σ (i = 1 to k) gi(x, y)

  where k is the number of images
- Astronomy:
  - Low-light-level images are very noisy
  - By averaging multiple images, the noise can be reduced
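A minimal sketch of noise reduction by averaging, using a synthetic flat grey image with added Gaussian noise (the scene and noise values are made up for the demonstration); the standard deviation of the residual noise drops roughly as 1/√k:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 120.0)      # a flat grey "scene"

k = 100                               # number of noisy observations
noisy = [clean + rng.normal(0, 25, clean.shape) for _ in range(k)]

average = sum(noisy) / k              # g_avg(x, y) = (1/k) * sum of g_i(x, y)
print(np.std(noisy[0] - clean))       # ~25:  noise in a single image
print(np.std(average - clean))        # ~2.5: noise after averaging 100 images
```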

Noise removal of a galaxy image:

[Figure: A noisy image of a galaxy, followed by the results of averaging over 5, 10, 20, 50 and 100 images; the grey levels of each pixel are averaged across the images]

Applications of arithmetic operations:

- Enhancement of images by subtraction, in radiography
- Shading correction by multiplication and division
- Finding a region of interest (ROI) by masking

Scaling
- After arithmetic operations, image values may go out of the range
  of levels (0 - 255), to < 0 or > 255
- First, the minimum intensity in the image is brought to zero:
  fm = f − min(f)
  (from the intensity of every pixel, subtract the minimum pixel intensity)
- Then scale the image pixel intensities by:
  fs = K × (fm / max(fm))
  where K is the maximum level of the grey scale (255)
- That is, the scaled value of any pixel is its grey value multiplied by
  255 / (maximum grey value of the pixels in the image)
- So that the intensity range of the image becomes 0 - 255
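A minimal sketch of this two-step scaling (the function name is only illustrative); applied to the sum of the two example images below, it reproduces the scaled values up to rounding:

```python
import numpy as np

def scale_to_full_range(f, K=255):
    """Shift the minimum to 0, then stretch the maximum to K."""
    fm = f - f.min()                               # f_m = f - min(f)
    return (K * (fm / fm.max())).round().astype(np.uint8)

a = np.array([[30, 50, 100, 230], [50, 70, 80, 200], [20, 80, 90, 180]])
b = np.array([[20, 30, 120, 240], [40, 60, 90, 200], [10, 50, 80, 170]])
print(scale_to_full_range(a + b))
```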

Image A:
  30  50 100 230
  50  70  80 200
  20  80  90 180

Image B:
  20  30 120 240
  40  60  90 200
  10  50  80 170

A + B:
  50  80 220 470
  90 130 170 400
  30 100 160 350

Minimum intensity = 30. Subtracting 30 from each pixel intensity:
  20  50 190 440
  60 100 140 370
   0  70 130 320

Maximum intensity = 440. Multiplying each pixel intensity by 255/440 (≈ 0.58)
gives the scaled image A + B:
  12  29 110 255
  35  58  82 215
   0  41  75 186

The range of the scaled image is 0 - 255.

Two images are subtracted, then scaled to intensity levels up to 255:
- The difference image is obtained by taking the difference of the two images
- It is then scaled to the full range of grey values (0 to 255)

[Figure: The original image; the image obtained by setting the least significant bit of every pixel to 0; and the difference image scaled to 0 - 255]

Subtraction of a mask for an angiography image:

[Figure: The mask image; the live image; the difference between the mask and the live image; and the enhanced difference image]

Removal of a shading effect:

[Figure: The original image; the shading pattern; and the processed image, the product of the original image and the reciprocal of the shading pattern]

ROI - Region of Interest:

[Figure: A dental X-ray image; an ROI mask for isolating teeth with fillings; and the product of the image and the mask]

Logical operations:
- Binary images:
  - 1-valued pixels form the foreground
  - 0-valued pixels form the background
- Union of two binary images: OR
- Intersection of two binary images: AND
- Complement of an image: NOT
- These operations are used extensively in morphology
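On boolean NumPy arrays the three operations are single operators; a minimal sketch with two tiny hand-made binary images:

```python
import numpy as np

a = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=bool)
b = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)

print((a | b).astype(int))   # OR:  union of the foregrounds
print((a & b).astype(int))   # AND: intersection of the foregrounds
print((~a).astype(int))      # NOT: complement (foreground/background swap)
```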

Logical operations - illustrations
(foreground = white pixels = 1, background = black pixels = 0)

[Figure: An original image ANDed with an image mask, and an original image ORed with an image mask, with the results shown]

AND: 1 AND 1 = 1;  0 AND 0 = 0, 0 AND 1 = 0, 1 AND 0 = 0
OR:  0 OR 0 = 0;   1 OR 1 = 1, 1 OR 0 = 1, 0 OR 1 = 1

End
