
A DYNAMIC THRESHOLD APPROACH FOR VIDEO OBJECT EXTRACTION

Presented By
Nijo John
Reg. No. 622112401011
ME - Applied Electronics
Paavai Engineering College

Guided By
Mr. Sakthivel V
Assistant Professor
Dept. of ECE

OBJECTIVE
To extract video objects automatically by background subtraction using dynamic threshold detection, and to determine the parameters of the extracted objects.

EXISTING TECHNOLOGIES
Visual Saliency
Motion Saliency
A moving pixel q_t at frame t is determined from the pixel pairs detected by forward or backward optical flow propagation.
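To make the motion-saliency idea concrete, here is a minimal sketch (not taken from the cited paper) that flags a pixel as moving when the dense optical-flow magnitude between consecutive frames exceeds a threshold; the frame file names and the 1.0-pixel threshold are assumptions.

```python
import cv2
import numpy as np

# Hypothetical consecutive grayscale frames; the paths are placeholders.
prev_frame = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr_frame = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense (Farneback) optical flow from frame t-1 to frame t.
flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Treat a pixel q_t as moving when its flow magnitude is large
# (the threshold of 1.0 pixels is an assumed value).
magnitude = np.linalg.norm(flow, axis=2)
moving_mask = (magnitude > 1.0).astype(np.uint8) * 255

cv2.imwrite("motion_saliency_mask.png", moving_mask)
```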

LIMITATIONS
Not possible to work with videos that have a moving background.
Extracts foreground objects with missing parts.
Compression is just one possible use for a segmentation algorithm based on visually salient information.
The method is not very efficient when objects move very fast.

SUMMARY OF LITERATURE SURVEY

1. Saliency based video segmentation with graph cuts and sequentially updated priors
   Technique Used: Markov random field (MRF) model
   Drawback: Segmented regions were randomly switched

2. Saliency detection using maximum symmetric surround
   Technique Used: Saliency computation methods
   Drawback: The saliency maps generated by this method suffer from low resolution

3. Automatic object extraction in single concept videos
   Technique Used: Object modeling and extraction
   Drawback: Some of the motion cues might be negligible due to low contrast

4. Visual saliency from image features with application to compression
   Technique Used: Visual saliency
   Drawback: Compression is just one possible use for a segmentation algorithm based on visually salient information

SUMMARY OF LITERATURE SURVEY Contd.,

5. Visual attention detection in video sequences using spatiotemporal cues
   Technique Used: Spatiotemporal saliency map
   Drawback: Fails to highlight the entire salient region

6. Background modeling using mixture of Gaussians for foreground detection
   Technique Used: Background modeling
   Drawback: Leads to misdetection of foreground objects and background

7. Efficient hierarchical graph-based video segmentation
   Technique Used: Hierarchical spatio-temporal segmentation
   Drawback: There is a restriction on the size of the video

8. Fast approximate energy minimization via graph cuts
   Technique Used: Energy minimization via graph cuts
   Drawback: Produces only low energy

9. A framework for fast interactive image and video segmentation and matting
   Technique Used: Feature distribution estimation
   Drawback: Does not work when the distributions are significantly overlapped

10. Interactive video segmentation supported by multiple modalities with an application to depth maps
    Technique Used: Interactive segmentation correction
    Drawback: Not very efficient when objects move very fast

A Dynamic Threshold Approach for Video Object Extraction
Background subtraction is applied for accurate moving object detection in dynamic scenes, using dynamic threshold detection and determination of object parameters.

BLOCK DIAGRAM
Input Video → Frame Separation → Frame Subtraction → Dynamic Threshold → Morphological Process → Connected Component Analysis
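As a minimal sketch of the final block, assuming a binary foreground mask has already been produced by the earlier stages, connected component analysis can label each blob and report simple object parameters (area, bounding box, centroid); the mask path and the minimum-area filter are assumptions.

```python
import cv2

# Assumed binary foreground mask produced by the earlier pipeline stages.
mask = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)

# Label connected foreground regions and collect per-object statistics.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

for label in range(1, num_labels):      # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]
    if area < 50:                       # assumed minimum object size, drops noise blobs
        continue
    x = stats[label, cv2.CC_STAT_LEFT]
    y = stats[label, cv2.CC_STAT_TOP]
    w = stats[label, cv2.CC_STAT_WIDTH]
    h = stats[label, cv2.CC_STAT_HEIGHT]
    cx, cy = centroids[label]
    print(f"object {label}: area={area}, bbox=({x}, {y}, {w}, {h}), centroid=({cx:.1f}, {cy:.1f})")
```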

Frame Subtraction
Preprocessing
Background model
Subtraction and Update of the Background Model
Foreground/background classification
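A minimal sketch of these steps, assuming a running-average background model over grayscale frames; the video path, learning rate, and the fixed classification threshold are assumptions rather than values from the slides.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input_video.avi")      # assumed input path
background = None
alpha = 0.05                                   # assumed background learning rate

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocessing: grayscale conversion and smoothing.
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if background is None:
        # Initialize the background model from the first frame.
        background = gray.astype(np.float32)
        continue
    # Subtraction against the current background model.
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    # Foreground/background classification (a fixed threshold here; the dynamic
    # threshold described on the next slide would replace this constant).
    _, foreground = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Update the background model with the new frame.
    cv2.accumulateWeighted(gray, background, alpha)

cap.release()
```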

Dynamic Threshold Method

T = T[x, y, p(x, y), f(x, y)]

where f(x, y) is the gray level at pixel (x, y) and p(x, y) is a local property of the neighborhood, such as the average, the median, or the midpoint between the minimal and the maximal gray level.
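A minimal sketch of one possible dynamic threshold in this sense, taking p(x, y) as the neighborhood average of the difference image; the window size and the offset constant are assumptions.

```python
import cv2
import numpy as np

# Assumed input: absolute difference image from the frame-subtraction stage.
diff = cv2.imread("difference_image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

win = 15          # assumed neighborhood size
offset = 10.0     # assumed constant added to the local statistic

# p(x, y): average gray level in the win x win neighborhood around each pixel.
local_mean = cv2.blur(diff, (win, win))

# T(x, y) depends on the position and its local property; a pixel is foreground
# when its value f(x, y) exceeds the local threshold.
threshold_map = local_mean + offset
foreground = (diff > threshold_map).astype(np.uint8) * 255

cv2.imwrite("dynamic_threshold_mask.png", foreground)
```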

Morphological Filtering
Erosion
Dilation
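A minimal sketch of this filtering step, assuming the binary mask from the dynamic threshold stage; the structuring-element shape and size are assumptions.

```python
import cv2

# Assumed binary mask from the dynamic threshold stage.
mask = cv2.imread("dynamic_threshold_mask.png", cv2.IMREAD_GRAYSCALE)

# Assumed 5x5 elliptical structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Erosion removes isolated noise pixels; dilation restores the eroded object area.
eroded = cv2.erode(mask, kernel, iterations=1)
cleaned = cv2.dilate(eroded, kernel, iterations=1)

cv2.imwrite("morphology_cleaned_mask.png", cleaned)
```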

Parameters Analysis

MSE
MSE = Σ(input - output)^2 / N

PSNR
PSNR = 10 log10(255^2 / MSE)
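A minimal sketch of both metrics, assuming 8-bit grayscale input and output frames; the file names are placeholders.

```python
import cv2
import numpy as np

# Assumed 8-bit grayscale reference (input) frame and processed (output) frame.
inp = cv2.imread("input_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
out = cv2.imread("output_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# MSE: mean of the squared pixel differences over all N pixels.
mse = np.mean((inp - out) ** 2)

# PSNR for 8-bit images, with a peak value of 255.
psnr = 10 * np.log10((255.0 ** 2) / mse) if mse > 0 else float("inf")

print(f"MSE = {mse:.4f}, PSNR = {psnr:.4f} dB")
```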


ADVANTAGES
Less sensitive to noise
Automatic background updating model
Higher accuracy
Possible to work with videos with a moving background


APPLICATIONS
Video surveillance
Object detection
People counting


RESULTS


Results Contd..,

[Figure: input frames and the corresponding extracted objects]


Comparison Table

Parameter          Existing Techniques    Proposed Technique
Sensitivity (%)    91.428                 97.1142
MSE                0.5894                 0.20842
PSNR (dB)          46.7856                54.9414


CONCLUSION AND FUTURE ENHANCEMENT

Compared with unsupervised video object extraction methods, this approach was shown to model the foreground object extraction better.
With the help of a better algorithm for computing the threshold level, the accuracy of the extracted objects can be increased in future work.


References
1. Exploring Visual and Motion Saliency for Automatic Video Object Extraction, Wei-Te Li, Haw-Shiuan Chang, Kuo-Chin Lien, Hui-Tang Chang, and Yu-Chiang Frank Wang, IEEE Transactions on Image Processing, vol. 22, no. 7, Jul. 2013.
2. Visual saliency from image features with application to compression, P. Harding and N. M. Robertson, Cognit. Comput., vol. 5, no. 1, pp. 76-98, 2012.
3. Automatic object extraction in single concept videos, K.-C. Lien and Y.-C. F. Wang, in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2011, pp. 1-6.
4. Key-segments for video object segmentation, Y. J. Lee, J. Kim, and K. Grauman, in Proc. IEEE Int. Conf. Comput. Vis., Nov. 2011, pp. 1995-2002.
5. Saliency detection using maximum symmetric surround, R. Achanta and S. Süsstrunk, in Proc. IEEE Int. Conf. Image Process., Sep. 2010, pp. 2653-2656.

References contd..,
6. Efficient hierarchical graph-based video segmentation, M. Grundmann, V. Kwatra, M. Han, and I. Essa, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2010.
7. Saliency-based video segmentation with graph cuts and sequentially updated priors, K. Fukuchi, K. Miyazato, A. Kimura, S. Takagi, and J. Yamato, in Proc. IEEE Int. Conf. Multimedia Expo, Jun.-Jul. 2009, pp. 638-641.
8. Background modeling using mixture of Gaussians for foreground detection: A survey, T. Bouwmans, F. E. Baf, and B. Vachon, Recent Patents Comput. Sci., vol. 3, no. 3, pp. 219-237, 2008.
9. A geodesic framework for fast interactive image and video segmentation and matting, X. Bai and G. Sapiro, in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2007.
10. Object tracking: A survey, A. Yilmaz, O. Javed, and M. Shah, ACM Comput. Surv., vol. 38, Dec. 2006.

Thank You...
