
Teaching Innovation - Entrepreneurial - Global



The Centre for Technology-enabled Teaching & Learning, NYSS, India
DTEL (Department for Technology Enhanced Learning)
DEPARTMENT OF ELECTRONICS AND
TELECOMMUNICATION ENGINEERING
VII-SEMESTER
PRINCIPLES OF IMAGE PROCESSING





UNIT NO. 3: IMAGE SEGMENTATION

UNIT 3:- SYLLABUS
1. Image segmentation
2. Detection of discontinuities
3. Edge linking and boundary detection
4. Thresholding
5. Region-oriented segmentation

CHAPTER-1 SPECIFIC OBJECTIVE / COURSE OUTCOME
The student will be able to:
1. Understand image segmentation algorithms.
2. Detect edges and boundaries.

LECTURE 1:- Image segmentation
Basic approaches for image segmentation

Introduction to image segmentation
The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application.
The segmentation is based on measurements taken from the image and might be greylevel, colour, texture, depth or motion.

Introduction to image segmentation
Usually image segmentation is an initial and vital step in a series of processes aimed at overall image understanding.
Applications of image segmentation include:
Identifying objects in a scene for object-based measurements such as size and shape
Identifying objects in a moving scene for object-based video compression (MPEG-4)
Identifying objects at different distances from a sensor, using depth measurements from a laser range finder, to enable path planning for mobile robots

Introduction to image segmentation
Example 1: Segmentation based on greyscale
A very simple model of greyscale leads to inaccuracies in object labelling.

Introduction to image segmentation
Example 2: Segmentation based on texture
Enables object surfaces with varying patterns of grey to be segmented.

Introduction to image segmentation
Example 3: Segmentation based on motion
The main difficulty of motion segmentation is that an intermediate step is required to (either implicitly or explicitly) estimate an optical flow field.
The segmentation must be based on this estimate and not, in general, on the true flow.

Introduction to image segmentation
Example 4: Segmentation based on depth
This example shows a range image, obtained with a laser range finder.
A segmentation based on the range (the object distance from the sensor) is useful in guiding mobile robots.

Introduction to image segmentation
[Figure: original image, range image and segmented image]

Greylevel histogram-based segmentation
We will look at two very simple image segmentation techniques that are based on the greylevel histogram of an image:
Thresholding
Clustering
We will use a very simple object/background test image.
We will consider zero-noise, low-noise and high-noise versions of this image.

Greylevel histogram-based segmentation
[Figure: the test image in its noise free, low noise and high noise versions]

Greylevel histogram-based segmentation
How do we characterise low noise and high noise?
We can consider the histograms of our images.
For the noise-free image, it is simply two spikes at i = 100 and i = 150.
For the low-noise image, there are two clear peaks centred on i = 100 and i = 150.
For the high-noise image, there is a single peak: the two greylevel populations corresponding to object and background have merged.

Greylevel histogram-based segmentation
[Figure: histograms h(i) of the noise free, low noise and high noise images]

Greylevel histogram-based segmentation
We can define the input image signal-to-noise ratio in terms of the mean greylevel values of the object and background pixels and the additive noise standard deviation:

S/N = |μ_b − μ_o| / σ
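As a worked example (the noise standard deviation σ = 10 is an illustrative assumption, not stated on the slide): with the two population means 100 and 150 from the earlier histogram, S/N = |150 − 100| / 10 = 5, which matches the low-noise figure quoted on the next slide.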

Greylevel histogram-based segmentation
For our test images:
S/N (noise free) = ∞
S/N (low noise) = 5
S/N (high noise) = 2

Greylevel thresholding
We can easily understand segmentation based on thresholding by looking at the histogram of the low noise object/background image.
There is a clear valley between the two peaks.

Greylevel thresholding
[Figure: histogram h(i) showing the object and background peaks separated by a threshold T]

Greylevel thresholding
We can define the greylevel thresholding algorithm as follows (a minimal sketch is given below):
If the greylevel of pixel p <= T, then p is an object pixel;
else p is a background pixel.
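A minimal sketch of this rule in Python/NumPy (the image array `img`, the threshold value and the synthetic test data are illustrative assumptions):

```python
import numpy as np

def threshold_segment(img, T):
    """Label pixels <= T as object (1) and pixels > T as background (0)."""
    return (img <= T).astype(np.uint8)

# Illustrative object/background test image: background mean 150, object mean 100
rng = np.random.default_rng(0)
img = rng.normal(150, 10, (64, 64))                 # low-noise background
img[16:48, 16:48] = rng.normal(100, 10, (32, 32))   # object region
mask = threshold_segment(img, T=125)
```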

Greylevel thresholding
This simple threshold test begs the obvious question: how do we determine the threshold?
Many approaches are possible:
Interactive threshold
Adaptive threshold
Minimisation method

Greylevel thresholding
We will consider in detail a minimisation method for determining the threshold:
Minimisation of the within-group variance
(Robot Vision, Haralick & Shapiro, volume 1, page 20)

Greylevel thresholding
[Figure: idealised object/background image histogram with threshold T]

Greylevel thresholding
Any threshold separates the histogram into two groups, each group having its own statistics (mean, variance).
The homogeneity of each group is measured by the within-group variance.
The optimum threshold is the threshold that minimises the within-group variance, thus maximising the homogeneity of each group.

Greylevel thresholding
Let group o (object) be those pixels with greylevel <= T.
Let group b (background) be those pixels with greylevel > T.
The prior probability of group o is p_o(T).
The prior probability of group b is p_b(T).
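A hedged sketch of the within-group variance minimisation over a greylevel histogram (the function and variable names are our own, not from the slides):

```python
import numpy as np

def best_threshold(hist):
    """Return the threshold T that minimises the within-group variance.

    hist: length-256 array of greylevel counts.
    Group o = levels <= T, group b = levels > T, as defined above.
    """
    p = np.asarray(hist, dtype=float)
    p /= p.sum()                                   # normalised histogram
    levels = np.arange(p.size)
    best_T, best_w = None, np.inf
    for T in range(1, p.size - 1):
        p_o, p_b = p[:T + 1].sum(), p[T + 1:].sum()
        if p_o == 0 or p_b == 0:
            continue
        mu_o = (levels[:T + 1] * p[:T + 1]).sum() / p_o
        mu_b = (levels[T + 1:] * p[T + 1:]).sum() / p_b
        var_o = (((levels[:T + 1] - mu_o) ** 2) * p[:T + 1]).sum() / p_o
        var_b = (((levels[T + 1:] - mu_b) ** 2) * p[T + 1:]).sum() / p_b
        w = p_o * var_o + p_b * var_b              # within-group variance
        if w < best_w:
            best_T, best_w = T, w
    return best_T
```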

Multimodal Histogram
If there are three or more dominant modes in the image histogram, the histogram has to be partitioned by multiple thresholds.
Multilevel thresholding classifies a point (x,y) as belonging to one object class if T1 < f(x,y) <= T2, to the other object class if f(x,y) > T2, and to the background if f(x,y) <= T1.
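A minimal sketch of that two-threshold rule (the array and label names are our own):

```python
import numpy as np

def multilevel_threshold(f, T1, T2):
    """Label image f using two thresholds T1 < T2.

    0 = background     (f <= T1)
    1 = object class 1 (T1 < f <= T2)
    2 = object class 2 (f > T2)
    """
    labels = np.zeros(f.shape, dtype=np.uint8)
    labels[(f > T1) & (f <= T2)] = 1
    labels[f > T2] = 2
    return labels
```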

Thresholding multimodal histograms
A method based on Discrete Curve Evolution can be used to find thresholds in the histogram.
The histogram is treated as a polyline and is simplified until only a few vertices remain.
Thresholds are determined by the vertices that are local minima.
THANK YOU


LECTURE 3:- Detection of Discontinuities
Point, Line and Edge Detection

Detection of Discontinuities
We want to detect the three basic types of gray-level discontinuities: points, lines and edges.
The common way is to run a mask through the image.

Point Detection
A point has been detected at the location on which the mask is centered if |R| > T,
where T is a nonnegative threshold and R is the sum of products of the mask coefficients with the gray levels contained in the region encompassed by the mask.

Point Detection
Note that the mask is the same as the Laplacian mask (Chapter 3).
The only differences that are considered of interest are those large enough (as determined by T) to be considered isolated points: |R| > T.
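A minimal sketch of this test, assuming SciPy is available and using the usual 3x3 Laplacian-style point-detection mask:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 point-detection mask (Laplacian-like: centre 8, neighbours -1)
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])

def detect_points(img, T):
    """Mark locations where |R| > T, with R the mask response."""
    R = convolve(img.astype(float), point_mask, mode='reflect')
    return np.abs(R) > T
```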


[Figure: point detection example]

Line Detection
The horizontal mask gives its maximum response when a line passes through the middle row of the mask against a constant background.
A similar idea is used with the other masks.
Note: the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than the other possible directions.

Line Detection
Apply all four masks to the image.
Let R1, R2, R3 and R4 denote the responses of the horizontal, +45 degree, vertical and -45 degree masks, respectively.
If, at a certain point in the image, |R_i| > |R_j| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i.

Line Detection
Alternatively, if we are interested in detecting all lines in an image in the direction defined by a given mask, we simply run the mask through the image and threshold the absolute value of the result.
The points that are left are the strongest responses which, for lines one pixel thick, correspond closest to the direction defined by the mask.
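A sketch of both variants with the standard four line-detection masks (coefficient 2 along the preferred direction, -1 elsewhere); SciPy is an assumed dependency:

```python
import numpy as np
from scipy.ndimage import convolve

line_masks = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    '+45':        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    '-45':        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def line_responses(img):
    """Absolute response |R_i| of each directional mask."""
    f = img.astype(float)
    return {name: np.abs(convolve(f, m)) for name, m in line_masks.items()}

def lines_in_direction(img, direction, T):
    """Second variant: threshold the absolute response of a single mask."""
    return line_responses(img)[direction] > T
```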

[Figure: line detection example]

Edge Detection
In Chapter 3 we discussed approaches for implementing the first-order derivative (gradient operator) and the second-order derivative (Laplacian operator).
Here, we will talk only about their properties for edge detection.

Ideal and Ramp Edges
In practice edges are ramp-like rather than ideal step edges, because of optics, sampling and other image acquisition imperfections.

Thick Edge
The slope of the ramp is inversely proportional to the degree of blurring in the edge.
We no longer have a thin (one pixel thick) path.
Instead, an edge point now is any point contained in the ramp, and an edge would then be a set of such points that are connected.
The thickness is determined by the length of the ramp.
The length is determined by the slope, which is in turn determined by the degree of blurring.
Blurred edges tend to be thick and sharp edges tend to be thin.

First and Second Derivatives
The signs of the derivatives would be reversed for an edge that transitions from light to dark.

Second Derivative
The second derivative produces two values for every edge in an image (an undesirable feature).
An imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge (the zero-crossing property).

Zero-crossing
The zero-crossing property is quite useful for locating the centers of thick edges.
We will return to it later.

Noise Images
First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0 and 10.0, respectively.
Second column: first-derivative images and gray-level profiles.
Third column: second-derivative images and gray-level profiles.

Keep in Mind
Fairly little noise can have a significant impact on the two key derivatives used for edge detection in images.
Image smoothing should be a serious consideration prior to the use of derivatives in applications where noise is likely to be present.

Edge Point
To determine a point as an edge point:
The transition in grey level associated with the point has to be significantly stronger than the background at that point.
A threshold is used to determine whether a value is significant or not.
The point's two-dimensional first-order derivative must be greater than the specified threshold.

Gradient Operator
First derivatives are implemented using the magnitude of the gradient:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

∇f ≡ mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

Taking the magnitude makes the operation nonlinear; it is commonly approximated as

∇f ≈ |Gx| + |Gy|
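A minimal sketch of this computation with the common 3x3 Sobel masks (shown on the next slide); SciPy is an assumed dependency:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel masks approximating Gx = df/dx and Gy = df/dy
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def gradient_magnitude(img, approximate=True):
    """Gradient magnitude, exact or with the |Gx| + |Gy| approximation."""
    f = img.astype(float)
    Gx = convolve(f, sobel_x)
    Gy = convolve(f, sobel_y)
    if approximate:
        return np.abs(Gx) + np.abs(Gy)
    return np.sqrt(Gx ** 2 + Gy ** 2)
```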

Gradient Masks
[Figure: Prewitt and Sobel gradient masks]

Diagonal Edges with Prewitt and Sobel Masks
[Figure: diagonal-direction Prewitt and Sobel masks]
[Figures: gradient-based edge detection examples]

Laplacian
The Laplacian is a linear operator:

∇²f = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²

A discrete approximation of the Laplacian operator:

∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y)

Laplacian of Gaussian
The Laplacian is combined with Gaussian smoothing to find edges via zero-crossings:

h(r) = −e^(−r² / 2σ²)

where r² = x² + y² and σ is the standard deviation.

∇²h(r) = −[(r² − σ²) / σ⁴] e^(−r² / 2σ²)
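A short sketch that samples this ∇²h on a grid to build a LoG kernel (the kernel size and σ are illustrative choices):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sample the Laplacian-of-Gaussian expression above on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    k = -((r2 - sigma ** 2) / sigma ** 4) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()   # shift so the coefficients sum to zero (see next slide)
```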

Mexican Hat
The coefficients must sum to zero.
Positive central term, surrounded by an adjacent negative region (a function of distance), with a zero outer region.

Linear Operation
The second derivative is a linear operation.
Thus, convolving the image with ∇²h is the same as convolving the image with the Gaussian smoothing function first and then computing the Laplacian of the result.

Example
a) Original image
b) Sobel gradient
c) Spatial Gaussian smoothing function
d) Laplacian mask
e) LoG
f) Thresholded LoG
g) Zero crossings

Zero Crossings & LoG
To approximate the zero crossings from the LoG image:
Threshold the LoG image by setting all its positive values to white and all negative values to black.
The zero crossings occur between positive and negative values of the thresholded LoG.
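A rough sketch of that procedure (sign thresholding followed by a check for sign changes between horizontal and vertical neighbours); the helper name is ours:

```python
import numpy as np

def zero_crossings(log_img):
    """Approximate the zero crossings of a LoG-filtered image."""
    pos = log_img > 0                        # positive -> white, negative -> black
    edges = np.zeros_like(pos)
    # a crossing lies where the sign differs from a horizontal or vertical neighbour
    edges[:, :-1] |= pos[:, :-1] != pos[:, 1:]
    edges[:-1, :] |= pos[:-1, :] != pos[1:, :]
    return edges
```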
THANK YOU


LECTURE 4:- Edge Linking
Edge Linking and Boundary Detection

Edge linking and boundary detection
Ideally, edge detection methods should yield pixels lying only on edges.
In practice, the detected edges are rarely complete, because of noise, nonuniform illumination, and other effects.
Linking procedures assemble edge pixels into meaningful edges.

Local Processing
Two factors are used to establish the similarity of edge pixels:
the strength of the response of the gradient operator
the direction of the gradient vector
A small analysis window is used (3×3 or 5×5).
Two conditions must hold for a pixel (x, y) in the neighbourhood of (x0, y0):

|∇f(x, y) − ∇f(x0, y0)| ≤ E

|α(x, y) − α(x0, y0)| < A
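A minimal sketch of that similarity test for two neighbouring pixels, with illustrative magnitude and angle thresholds E and A (the function name is ours):

```python
import numpy as np

def similar_edge_pixels(mag, angle, p, q, E=25.0, A=np.deg2rad(15)):
    """Check the two linking conditions between pixels p and q.

    mag, angle: gradient magnitude and direction images.
    p, q: (row, col) coordinates of the two pixels.
    """
    close_magnitude = abs(mag[p] - mag[q]) <= E
    close_direction = abs(angle[p] - angle[q]) < A
    return close_magnitude and close_direction
```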

[Figure: local-processing edge linking example]

Global Processing via the Hough Transform
Given n points in an image, suppose that we want to find subsets of these points that lie on straight lines.
The Hough transform maps the xy-plane (image space) into the ab-plane (the parameter space of the lines).

Global Processing via the Hough Transform
Subdivide the parameter space (ab-plane) into accumulator cells, with accumulator values A(i, j).
For each point, increment the cell corresponding to every line passing through it: A(p, q) = A(p, q) + 1.

Global Processing via the Hough Transform
Normal representation of a line:

x cos θ + y sin θ = ρ
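A compact sketch of vote accumulation in (ρ, θ) space for a binary edge image; the cell resolutions are illustrative choices:

```python
import numpy as np

def hough_accumulator(edge_img, n_theta=180, n_rho=200):
    """Accumulate A(rho, theta) votes for a binary edge image."""
    rows, cols = edge_img.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    rho_max = np.hypot(rows, cols)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            r_idx = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            acc[r_idx, t_idx] += 1           # A(p, q) = A(p, q) + 1
    return acc, rhos, thetas
```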

[Figures: Hough transform examples]
THANK YOU


LECTURE 5:- Thresholding
Basic Global Thresholding and Adaptive Thresholding

Thresholding
[Figure: histograms of an image with a dark background and a light object, and of an image with a dark background and two light objects]

Multilevel Thresholding
A point (x, y) belongs:
to one object class if T1 < f(x,y) <= T2
to another object class if f(x,y) > T2
to the background if f(x,y) <= T1
The threshold T may depend on:
only f(x,y), i.e. only on the gray-level values: a global threshold
both f(x,y) and p(x,y), i.e. on the gray-level values and a property of the pixel's neighbours: a local threshold

The Role of Illumination
f(x,y) = i(x,y) r(x,y)
a) Computer-generated reflectance function
b) Histogram of the reflectance function: object and background are well separated, so global thresholding is easy
c) Computer-generated (poor) illumination function
d) Product of a) and c)
e) Histogram of the product image: difficult to segment

Basic Global Thresholding
To generate a binary image, use a threshold T midway between the maximum and minimum gray levels.

Basic Global Thresholding
Instead of choosing T by visual inspection of the histogram, it can be estimated automatically (a sketch follows the steps):
1. Select an initial estimate for T.
2. Segment the image using T. This produces two groups of pixels: G1, consisting of all pixels with gray-level values > T, and G2, consisting of pixels with gray-level values <= T.
3. Compute the average gray-level values μ1 and μ2 for the pixels in regions G1 and G2.
4. Compute a new threshold value: T = 0.5 (μ1 + μ2).
5. Repeat steps 2 through 4 until the difference in T between successive iterations is smaller than a predefined parameter T0.
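A minimal sketch of this iterative procedure (the stopping parameter name T0 follows the slide; the initial estimate is the mid-range value):

```python
import numpy as np

def iterative_threshold(img, T0=0.5):
    """Iteratively estimate a global threshold T (steps 1-5 above)."""
    f = img.astype(float)
    T = 0.5 * (f.min() + f.max())            # step 1: initial estimate
    while True:
        G1, G2 = f[f > T], f[f <= T]         # step 2: segment with T
        mu1 = G1.mean() if G1.size else T    # step 3: group means
        mu2 = G2.mean() if G2.size else T
        T_new = 0.5 * (mu1 + mu2)            # step 4: new threshold
        if abs(T_new - T) < T0:              # step 5: stop when the change < T0
            return T_new
        T = T_new
```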

Example: Heuristic Method
Note the clear valley of the histogram and the effectiveness of the segmentation between object and background.
With T0 = 0, the algorithm converged after 3 iterations with the result T = 125.

Basic Adaptive Thresholding
Subdivide the original image into small subimages.
Use a different threshold to segment each subimage.
Since the threshold used for each pixel depends on the location of the pixel in terms of the subimages, this type of thresholding is adaptive.
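A rough sketch of this idea with a block size chosen for illustration; here each subimage threshold is simply the subimage mean (the iterative estimate sketched earlier could be substituted):

```python
import numpy as np

def adaptive_threshold(img, block=32):
    """Threshold each block x block subimage with its own threshold."""
    out = np.zeros(img.shape, dtype=np.uint8)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            sub = img[r:r + block, c:c + block]
            T = sub.mean()                   # per-subimage threshold
            out[r:r + block, c:c + block] = (sub > T).astype(np.uint8)
    return out
```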

Example: Adaptive Thresholding
[Figure: adaptive thresholding example]

Further Subdivision
a) Properly and improperly segmented subimages from the previous example
b)-c) Corresponding histograms
d) Further subdivision of the improperly segmented subimage
e) Histogram of the small subimage at the top
f) Result of adaptively segmenting d)
THANK YOU


LECTURE 6:- REGION ORIENTED SEGMENTATION
Region Growing and Split/Merge Algorithms
(Slides adapted from Bahadir K. Gunturk, EE 7730 - Image Analysis I)

Region-Oriented Segmentation
Region Growing
Region growing is a procedure that groups pixels or subregions into larger regions.
The simplest of these approaches is pixel aggregation, which starts with a set of seed points and grows regions by appending to each seed point those neighbouring pixels that have similar properties (such as gray level, texture, color, shape).
Region-growing based techniques are better than edge-based techniques in noisy images, where edges are difficult to detect.
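A small sketch of pixel aggregation by gray-level similarity from a single seed (a breadth-first flood over 4-neighbours; the tolerance value is illustrative):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` by appending 4-neighbours whose gray level
    is within `tol` of the seed's gray level."""
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    seed_val = float(img[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not region[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```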



[Figures: region growing examples]

Region-Oriented Segmentation
Region Splitting
Region growing starts from a set of seed points.
An alternative is to start with the whole image as a single region and subdivide the regions that do not satisfy a condition of homogeneity.
Region Merging
Region merging is the opposite of region splitting.
Start with small regions (e.g. 2x2 or 4x4 regions) and merge the regions that have similar characteristics (such as gray level, variance).
Typically, splitting and merging approaches are used iteratively.
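A compact sketch of the splitting half only (a recursive quadtree split driven by a variance-based homogeneity test; merging of adjacent similar leaves would follow and is omitted here):

```python
import numpy as np

def quadtree_split(img, r=0, c=0, h=None, w=None, max_var=100.0, min_size=4, leaves=None):
    """Recursively split img[r:r+h, c:c+w] while it is inhomogeneous."""
    if h is None or w is None:
        h, w = img.shape
    if leaves is None:
        leaves = []
    block = img[r:r + h, c:c + w]
    if block.var() <= max_var or min(h, w) <= min_size:
        leaves.append((r, c, h, w))          # homogeneous (or minimum-size) leaf
    else:
        h2, w2 = h // 2, w // 2
        for dr, dc, hh, ww in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                               (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
            quadtree_split(img, r + dr, c + dc, hh, ww, max_var, min_size, leaves)
    return leaves
```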


[Figure: region splitting and merging example]
THANK YOU


Reference Books:
1. Digital Image Processing, R.C. Gonzalez & R.E. Woods, 2nd edition, Addison Wesley / Pearson Education, 2002.
2. Fundamentals of Digital Image Processing, A.K. Jain, PHI, 2nd edition.