
Under Water Image Enhancement using Fusion Techniques

December 30, 2012

Contents

0.1 Introduction
0.2 Weight Map Calculation
    0.2.1 Contrast
    0.2.2 Saliency
    0.2.3 Well-exposedness
    0.2.4 Local Contrast Measure
0.3 Image Enhancement
0.4 Code
0.5 Results
References

0.1 Introduction

Image fusion is the process of combining information from two or more images of a scene into a single enhanced image. The aim of the fusion process is to integrate complementary data in order to enhance the information present in the respective source images. The method used here is based on the one described in the paper by Ancuti et al., 2012, with small changes. Here the two input images of the fusion process are derived from a gray-world algorithm and a global contrast stretching algorithm. The final image is a weighted blend of the two input images.
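As a rough illustration, the two inputs might be derived as in the following Python sketch. The report does not give the exact parameters of its modified algorithms, so the gains and percentiles below are illustrative assumptions, and the input file name is hypothetical:

```python
import cv2
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so that its mean
    matches the overall mean intensity."""
    img = img.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = channel_means.mean() / (channel_means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def contrast_stretch(img, low_pct=1.0, high_pct=99.0):
    """Global contrast stretch: map the chosen percentiles to [0, 255]."""
    img = img.astype(np.float32)
    lo, hi = np.percentile(img, (low_pct, high_pct))
    return np.clip((img - lo) * 255.0 / (hi - lo + 1e-6), 0, 255).astype(np.uint8)

original = cv2.imread("underwater.jpg")   # hypothetical input file
input1 = gray_world(original)             # first fusion input
input2 = contrast_stretch(original)       # second fusion input
```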
0.2 Weight Map Calculation

Weight maps are scalar images derived from the original image to aid in the fusion process. A weight map is derived from each input image. The weights are calculated based on some local or global feature of each pixel. Some of the features commonly used are contrast, saturation and exposedness. For each pixel, the information from the different measures is combined into a scalar weight map using simple additive or multiplicative functions, as sketched below.
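A minimal sketch of this combination step, assuming each measure has already been computed as an H x W array (the array names are illustrative, not from the report):

```python
import numpy as np

# Toy example: three per-measure maps for one input image (H x W).
h, w = 4, 4
w_contrast = np.random.rand(h, w)
w_saliency = np.random.rand(h, w)
w_exposedness = np.random.rand(h, w)

# Multiplicative combination into one scalar weight map per input;
# an additive variant simply sums the maps instead.
w_mult = w_contrast * w_saliency * w_exposedness
w_add = w_contrast + w_saliency + w_exposedness
```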

0.2.1 Contrast

The contrast of the image is estimated by applying a Laplacian filter to each channel of the image and taking the absolute value of the filter response. The Laplacian filter enhances the edges and texture in the image: edges and textured regions give high contrast values, while homogeneous regions give low values. However, this measure is not capable of distinguishing between ramp and flat regions of the image, although it does distinguish step edges. For underwater images this map will predominantly carry low weights.
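A sketch of this measure, assuming an 8-bit 3-channel input; averaging the per-channel responses is one reasonable choice, since the report does not specify how channels are merged:

```python
import cv2
import numpy as np

def contrast_weight(img):
    """Contrast measure: absolute Laplacian filter response,
    averaged over the colour channels."""
    img = img.astype(np.float32) / 255.0
    responses = [np.abs(cv2.Laplacian(img[..., c], cv2.CV_32F))
                 for c in range(3)]
    return np.mean(responses, axis=0)   # H x W scalar weight map
```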

Figure 1: Contrast weights for input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

0.2.2 Saliency

The saliency weight map aims to emphasize the discriminating objects that lose prominence in the underwater scene. The common method of detecting saliency is to measure the contrast difference between an image region and its surroundings, known as center-surround contrast. The image saliency algorithm is taken from the paper by Achanta et al., 2009.
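A sketch of the frequency-tuned saliency of Achanta et al., 2009, which measures the distance of each slightly blurred Lab pixel from the mean image colour; the small blur kernel size is an assumption:

```python
import cv2
import numpy as np

def saliency_weight(img):
    """Frequency-tuned saliency (after Achanta et al., 2009): distance
    of each blurred Lab pixel from the mean Lab colour of the image."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)        # small smoothing
    mean_vec = lab.reshape(-1, 3).mean(axis=0)        # mean Lab vector
    sal = np.linalg.norm(blurred - mean_vec, axis=2)  # per-pixel distance
    return sal / (sal.max() + 1e-6)                   # normalize to [0, 1]
```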

Figure 2: Saliency weights for input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

0.2.3 Well-exposedness

Looking at just the raw intensities within a channel reveals how well a pixel is exposed. The aim is to retain pixels that are neither over- nor under-exposed. Each pixel is weighted by how close its value is to 128, using a Gaussian function. The Gaussian function is applied to each color channel separately, and the results are multiplied or added depending on the requirements of the application.
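A sketch of this weight, working on a [0, 1] scale where 128 maps to 0.5. The spread sigma is a tunable assumption; 0.2 is the value suggested by Mertens et al., 2007:

```python
import numpy as np

def exposedness_weight(img, sigma=0.2):
    """Well-exposedness: Gaussian weight around mid-intensity
    (128 on an 8-bit scale), one factor per colour channel."""
    img = img.astype(np.float32) / 255.0
    w = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))
    return w.prod(axis=2)   # multiply the per-channel weights
```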

Figure 3: Exposedness weights for input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

0.2.4 Local Contrast Measure

For underwater images the global contrast measure is not sufficient to effectively represent the contrast. A local contrast measure better represents the contrast at a pixel location. The impact of this measure is to strengthen the local contrast, capturing ramp transitions in highlighted and shadowed parts that are not captured by the global contrast method. The local contrast measure is computed as the deviation between the pixel luminance level and its local average over a pre-defined neighborhood. This can be easily computed by first obtaining a low-pass filtered version of the image; the absolute value of the difference between the image and its low-pass version is used as the local contrast measure map.
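A sketch under the assumption that the low-pass filter is a Gaussian blur of the luminance channel; the neighborhood size is an illustrative choice:

```python
import cv2
import numpy as np

def local_contrast_weight(img, ksize=31):
    """Local contrast: absolute difference between the luminance
    channel and a low-pass (locally averaged) version of it."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    lowpass = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    return np.abs(gray - lowpass)
```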

Figure 4: Local contrast weights for input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

The weight maps are normalized so that the sum of the weights at each pixel location is 1:

$$\bar{W}^k(x, y) = \frac{W^k(x, y)}{\sum_{k=1}^{K} W^k(x, y)}$$

0.3 Image Enhancement

The enhanced image R(x, y) is obtained by fusing the defined inputs with the weight measures at every pixel location:

$$R(x, y) = \sum_{k=1}^{K} \bar{W}^k(x, y)\, I^k(x, y)$$

In the present application, the two input images are derived from a modified global contrast stretched image and a white-balanced image in Lab color space. Weight maps based on saliency, exposedness, global contrast and local contrast are obtained for this set of input images. Thus we have a linear combination of 2 input images and 4 weight maps, for a total of 8 weighted images. In the results below, outputs considering each individual measure with Laplacian blending, as well as naive blending using all the masks, are shown. The code for Laplacian blending was taken from Roy and Arnon, 2010. In naive blending some artifacts are seen; the Laplacian blending method reduces this effect. Thus, considering computational efficiency and requirements, we may choose the method corresponding to any one of the results.
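As an illustration, the naive per-pixel blend of the normalized weights might be implemented as follows; the function names are ours, not the author's:

```python
import numpy as np

def naive_fusion(inputs, weights):
    """Naive fusion: R(x, y) = sum_k Wbar_k(x, y) * I_k(x, y),
    with weights normalized to sum to 1 at every pixel."""
    inputs = [im.astype(np.float32) for im in inputs]
    total = np.sum(weights, axis=0) + 1e-12        # per-pixel weight sum
    fused = np.zeros_like(inputs[0])
    for im, w in zip(inputs, weights):
        fused += (w / total)[..., None] * im       # broadcast over channels
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Here `inputs` would be the two derived images and `weights` one combined scalar map per input. A multiscale variant in the spirit of the Laplacian blending discussed above (again a sketch, not the Roy and Arnon code) blends Laplacian pyramids of the inputs with Gaussian pyramids of the normalized weights and then collapses the result:

```python
import cv2
import numpy as np

def multiscale_fusion(inputs, weights, levels=5):
    """Laplacian-pyramid fusion: per-level weighted blend, then collapse."""
    total = np.sum(weights, axis=0) + 1e-12
    norm_w = [w / total for w in weights]
    fused = None
    for im, w in zip(inputs, norm_w):
        gp = [w.astype(np.float32)]                # Gaussian pyramid of weight
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        ip = [im.astype(np.float32)]               # image pyramid
        for _ in range(levels):
            ip.append(cv2.pyrDown(ip[-1]))
        lp = [ip[i] - cv2.pyrUp(ip[i + 1], dstsize=ip[i].shape[1::-1])
              for i in range(levels)]              # Laplacian levels
        lp.append(ip[-1])                          # coarsest residual
        blended = [l * g[..., None] for l, g in zip(lp, gp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]                                # collapse coarse to fine
    for lev in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[lev].shape[1::-1]) + fused[lev]
    return np.clip(out, 0, 255).astype(np.uint8)
```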

0.4 Code

For the code for the above routines, refer to the author's site.

0.5 Results

Figure 5: Example 1. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 6: Example 2. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 7: Example 3. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 8: Example 4. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 9: Example 5. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 10: Example 6. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 11: Example 7. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 12: Example 8. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 13: Example 9. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 14: Example 10. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 15: Example 11. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 16: Example 12. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


Figure 17: Example 13. Panels: (a) original, (b) input 1, (c) input 2, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, (h) naive blending.


References

[1] Radhakrishna Achanta et al. "Frequency-tuned Salient Region Detection". In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). Miami Beach, Florida, 2009, pp. 1597-1604.

[2] Cosmin Ancuti et al. "Enhancing Underwater Images and Videos by Fusion". In: CVPR. 2012.

[3] Tom Mertens, Jan Kautz, and Frank Van Reeth. "Exposure Fusion". In: Proceedings of the Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007), Maui, Hawaii, USA, October 29 - November 2, 2007. Ed. by Marc Alexa, Steven J. Gortler, and Tao Ju. IEEE Computer Society, 2007, pp. 382-390. isbn: 978-0-7695-3009-3.

[4] Roy and Arnon. "Simple Laplacian blender using OpenCV". 2010.
