
Proceedings of the 2nd International Conference on Current Trends in Engineering and Management ICCTEM-2014
17-19 July 2014, Mysore, Karnataka, India

INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET)
ISSN 0976-6464 (Print), ISSN 0976-6472 (Online)
Volume 5, Issue 8, August (2014), pp. 178-184
IAEME: http://www.iaeme.com/IJECET.asp
Journal Impact Factor (2014): 7.2836 (Calculated by GISI), www.jifactor.com

IMPROVED NONLOCAL MEANS BASED ON PRE-CLASSIFICATION AND INVARIANT BLOCK MATCHING

Chethan K1, Bindu N S2, Abhishek P3, Jayanth C K4
1, 2, 3, 4 ECE Department, VVCE, Gokulam, Mysore, India

ABSTRACT
One of the most popular image denoising methods based on self-similarity is nonlocal means (NLM). Though it can achieve remarkable performance, the method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM both quantitatively and visually, especially when the noise level is high.
Keywords: Gaussian Blur, Image Denoising, K-Means Clustering, Moment Invariants,
Nonlocal Means (NLM).
1. INTRODUCTION
Image denoising is often applied in display systems to improve image quality, because source images are usually corrupted by various additive noises. There are many denoising methods in both the spatial and frequency domains. Among spatial domain methods, prevailing techniques include the bilateral filter [1], trained filter [2], K-SVD [3], and nonlocal means (NLM)-based filters. State-of-the-art transform domain algorithms are the Gaussian Scale Mixture Model based method [4], Stein's Unbiased Risk Estimate (SURE)-LET [5] and Block Matching and 3-D filtering (BM3D) [6]. As transform-based methods require complex Fourier or wavelet transforms, which are usually not affordable by display devices due to hardware limitations, spatial techniques tend to be more practical. Many natural or texture images contain repetitive patterns. One of the popular denoising methods, NLM [7], exploits this image characteristic and produces promising results both objectively and subjectively. The main idea is to replace each pixel with a weighted average of other pixels with similar neighborhoods. The main difference between NLM and previous approaches is that the weights in the NLM filter do not depend on the spatial distance between target patches and candidates but on the difference of intensity values.


The original NLM algorithm is computationally intensive, especially its full search. Accordingly, there has been a lot of work focusing on this issue. The most time-consuming part of NLM is the weight calculation, so many methods concentrate on eliminating dissimilar patches before weighted averaging. In [8], pre-selection of contributing neighborhoods based on mean and gradient values was proposed. Similarly, local variance [9] and singular value decomposition (SVD) [10] have been introduced to eliminate dissimilar pixels. In order to accelerate the weight calculation, the fast Fourier transform (FFT) has been proposed in [11], which is approximately 50 times faster than the original NLM. The approach in [12] exploits the symmetry in the weight function and computes the Euclidean distance by a recursive moving-average filter applied symmetrically, which also considerably improves the efficiency. Pang et al. [13] utilized several critical pixels near the patch center instead of all pixels in the neighborhood. For the improvement of quantitative and qualitative results, tuning of the smoothing parameter has been proposed in [9].
In [14], a family of non-local image smoothing algorithms was designed that approximates the application of diffusion partial differential equations (PDEs) on a specific Euclidean space of image patches; these methods preserve the structures in the original image domain. In order to increase the number of reliable candidates for noisy target patches, the authors in [15] proposed rotationally invariant block matching (RIBM) for nonlocal image denoising, which involves several steps such as estimating the rotation angle, rotating the block via interpolation, and then applying standard block matching. In our method, we focus on improving the denoising performance of NLM by means of finding reliable candidate sets. Though previous methods [10], [15] have attempted to provide better candidates for weighted averaging, our approach is unique in that it exploits moment invariants in pre-selection and row image weighted averaging for performance improvement. The experimental results show that this method outperforms the original NLM in terms of both quantitative metrics and visual quality.
The rest of this paper is organized as follows. Related work on NLM is summarized in
Section 2. The proposed improvements on NLM are described in Section 3. In Section 4,
experiments and results are presented. Section 5 provides the conclusion and future work.
2. EXISTING METHOD
The idea of NLM is based on the fact that patches in an image always have self-similarity. Given a noisy image $v = \{v(i) \mid i \in I\}$, $I \subset \mathbb{R}^2$, the restored intensity of pixel $i$, $NL(v)(i)$, is a weighted average of all intensity values within the neighbourhood $I$. Let us denote [7]

$$NL(v)(i) = \sum_{j \in I} w(i,j)\, v(j) \qquad (1)$$

Where v is the intensity function, v(j) is the intensity at pixel j, and w(i, j) is the weight
assigned to v(j) in the restoration of pixel i. The weight can be calculated by [7]
$$w(i,j) = \frac{1}{Z(i)}\, e^{-\|v(N_i) - v(N_j)\|_{2,a}^{2} / h^{2}} \qquad (2)$$

Where $N_i$ denotes a patch of fixed size centered at pixel $i$. The similarity $\|v(N_i) - v(N_j)\|_{2,a}^{2}$ is a Gaussian-weighted Euclidean distance, so the weight is a decreasing function of the distance between patches; $a > 0$ is the standard deviation of the Gaussian kernel, $Z(i) = \sum_{j} e^{-\|v(N_i) - v(N_j)\|_{2,a}^{2} / h^{2}}$ is the normalization constant, and $h$ acts
as a filtering parameter. This method is computationally expensive and time consuming. The quality
of the reconstructed image is poor when noise is high. In the proposed method the set of reliable


candidates that are similar to the current patch is increased by clustering based on similarities and row
image weighted averaging.
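As a concrete illustration of Eqs. (1) and (2), the sketch below computes the NLM estimate of a single pixel with plain NumPy. The half-widths `patch` and `search`, the filtering parameter `h`, and the use of an unweighted patch distance (the Gaussian weighting with standard deviation $a$ is omitted) are simplifying assumptions for illustration, not the exact settings of the original NLM implementation.

```python
import numpy as np

def nlm_pixel(v, i, j, patch=3, search=10, h=10.0):
    """Minimal NLM estimate of pixel (i, j) following Eqs. (1)-(2).

    v is a 2-D float array; (i, j) is assumed to lie far enough from the
    border that the reference patch fits inside the image.
    """
    H, W = v.shape
    ref = v[i - patch:i + patch + 1, j - patch:j + patch + 1]          # patch N_i
    num, Z = 0.0, 0.0
    for r in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for c in range(max(patch, j - search), min(W - patch, j + search + 1)):
            cand = v[r - patch:r + patch + 1, c - patch:c + patch + 1]  # patch N_j
            d2 = np.mean((ref - cand) ** 2)        # squared patch distance
            w = np.exp(-d2 / h ** 2)               # un-normalized weight, Eq. (2)
            num += w * v[r, c]
            Z += w                                 # accumulates Z(i)
    return num / Z                                 # weighted average, Eq. (1)
```

The full-search variant loops over every pixel of the image instead of a limited search window, which is exactly the computational burden discussed above.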
3. PROPOSED METHOD
In the proposed algorithm we try to improve the denoising performance of NLM by means of finding more reliable candidate sets based on similarities. Improved NLM can be divided into pre-processing, feature extraction, clustering, and row image weighted averaging.
3.1 Pre-processing
In pre-processing, a Gaussian function is convolved with the noisy image to obtain a Gaussian blurred image. This step removes high-frequency noise and smooths the noisy image. Gaussian filters are low-pass filters applied before feature extraction, and they provide the pre-processing for pre-classification. They are a class of linear smoothing filters with weights chosen according to a Gaussian function, and they are very good at removing noise drawn from a normal distribution. The 2-D zero-mean discrete Gaussian function used for a mask of size (2m+1) x (2m+1) with centre (0,0) and x, y ranging from (-m,-m) to (m,m) is given by
$$G(x,y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \qquad (3)$$

Where $x, y \in \{-m, \ldots, 0, \ldots, m\}$ and $\sigma$ is the standard deviation of the Gaussian distribution.


Normalization is necessary in order to preserve the brightness level of the image:

$$\mathrm{Sum} = \sum_{x=-m}^{m}\sum_{y=-m}^{m} G(x,y) \qquad (4)$$

$$G_k(x,y) = \frac{G(x,y)}{\mathrm{Sum}} \qquad (5)$$

The result of Gaussian blur for the whole image is given by

$$G_b = G_k * v \qquad (6)$$

Where $v$ is the intensity of the input noisy image and $*$ denotes the convolution operation. In our implementation, a large $\sigma$ is not necessary, because most details of the input noisy image should be retained, and Gaussian blur with a large $\sigma$ might introduce artifacts; $\sigma$ determines the width of the filter and hence the amount of smoothing. After smoothing, the filtered image is divided into patches of appropriate size, and these patches serve as the input to the feature extraction block. It is important to choose the patch size carefully: if the patch is too large, the quality of the reconstructed image will be poor, leading to a lower PSNR value, while if the patch is too small, there will be fewer reliable candidates for weighted averaging.
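As a sketch of this pre-processing stage (under the assumption that SciPy is available), the following code builds the normalized $(2m+1)\times(2m+1)$ Gaussian mask of Eqs. (3)-(5), convolves it with the noisy image as in Eq. (6), and cuts the blurred result into non-overlapping patches. The default values of m, sigma, and the patch size are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(m=2, sigma=1.0):
    """Normalized zero-mean Gaussian mask G_k, Eqs. (3)-(5)."""
    y, x = np.meshgrid(np.arange(-m, m + 1), np.arange(-m, m + 1), indexing='ij')
    G = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)  # Eq. (3)
    return G / G.sum()                                  # Eqs. (4)-(5)

def preprocess(v, m=2, sigma=1.0, patch=5):
    """Gaussian-blur the noisy image (Eq. (6)) and split it into patches."""
    Gb = convolve2d(v, gaussian_kernel(m, sigma), mode='same', boundary='symm')
    H, W = Gb.shape
    positions = [(r, c) for r in range(0, H - patch + 1, patch)
                        for c in range(0, W - patch + 1, patch)]
    patches = [Gb[r:r + patch, c:c + patch] for r, c in positions]
    return Gb, patches, positions
```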

3.2 Feature extraction


Feature extraction is a special form of dimensionality reduction, in which we transform the input data into a set of features. The feature set extracts the relevant information from the input data so that the desired task can be performed using this reduced representation instead of the full-size input. Feature extraction is used in many algorithms such as face recognition, pattern recognition, etc. In feature

extraction, moment invariants are applied to the raw image patches to obtain moment vectors. Higher-order moment invariants have been shown to be more vulnerable to additive white noise. Therefore, in the proposed algorithm, Hu's moment invariants, which have the highest order of 2, are applied as the feature descriptor for clustering. Given an image and a patch centered at some location, the moment invariants of this patch can be represented by a vector; for the whole image, such vectors serve as the input vectors of the clustering stage. Hu's moment invariants are widely applied to image pattern recognition in a variety of applications due to their invariance to image translation, scaling, and rotation. Hu derived six absolute orthogonal invariants and one skew orthogonal invariant based upon algebraic invariants. Hu's moments are rotation invariant, which means that even if a patch is rotated by some angle or mirrored, the moment values remain the same; hence such patches are clustered into the same group in later sections.
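The feature extraction step can be sketched with OpenCV, assuming `cv2` is available: `cv2.moments` followed by `cv2.HuMoments` yields the seven Hu invariants of a patch, which form the feature vector passed to clustering. The logarithmic scaling applied at the end is a common convention for compressing the dynamic range of the invariants and is our assumption, not a detail taken from the paper.

```python
import numpy as np
import cv2

def hu_features(patch):
    """Seven Hu moment invariants of a (blurred) image patch."""
    hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).flatten()
    # Log-scale for numerical stability (assumed convention, not from the paper)
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# One feature vector per patch, e.g. for the patches produced in Section 3.1:
# features = np.array([hu_features(p) for p in patches])
```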

3.3 Clustering
Clustering is a method of quantizing the feature vectors. In the proposed algorithm, adaptive k-means clustering is used for vector quantization. Clustering is performed to obtain clusters of similar patches based on the moment features; here Hu's moment features serve as the input to adaptive k-means clustering. In standard k-means clustering the data is initially clustered randomly. To avoid the resulting uncertainty, the Davies-Bouldin index is used to obtain the best number of clusters; it can be defined as

$$\mathrm{DBI} = \frac{1}{M}\sum_{i=1}^{M} \max_{j \neq i} \frac{\sigma_i + \sigma_j}{d(c_i, c_j)} \qquad (7)$$

where $M$ is the number of clusters, $\sigma_i$ is the average distance of the points in cluster $i$ to its centroid $c_i$, and $d(c_i, c_j)$ is the distance between centroids.

The adaptive K-means clustering algorithm starts with the selection of K elements from the input data set. For each cluster, it decides the number of comparisons for each search and adaptively classifies the acquired data by choosing an appropriate centroid. Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k <= n), S = {S1, S2, ..., Sk}, so as to minimize the within-cluster sum of squares (WCSS):
$$\arg\min_{S} \sum_{i=1}^{k} \sum_{x_j \in S_i} \| x_j - \mu_i \|^{2} \qquad (8)$$

Where $\mu_i$ is the mean of the points in $S_i$.
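A possible realization of this pre-classification step uses scikit-learn, which provides both k-means and the Davies-Bouldin index; the candidate values of K below are illustrative only, not the values used in the experiments.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_patches(features, k_candidates=(400, 450, 500)):
    """Cluster Hu-moment feature vectors; pick K by the Davies-Bouldin index (Eq. (7))."""
    best_labels, best_dbi = None, np.inf
    for k in k_candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        dbi = davies_bouldin_score(features, labels)   # lower DBI = better separated clusters
        if dbi < best_dbi:
            best_labels, best_dbi = labels, dbi
    return best_labels
```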

3.4 Row image and weighted averaging


The clustered patches have similarities in terms of intensity, shape, and size, and patches in the same cluster have more similar neighbourhoods. A row image is constructed for each cluster, hence for n clusters there will be n row images. Finally, NLM is applied to each row image, and the denoised image is reconstructed by putting each filtered patch back into its corresponding position.
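A minimal sketch of this step is given below, under the assumption that all patches are square, of equal size, and accompanied by their top-left positions in the original image (as returned by the pre-processing sketch in Section 3.1); `nlm_filter` stands for any standard NLM routine applied to the concatenated row image.

```python
import numpy as np

def denoise_by_cluster(patches, positions, labels, out_shape, nlm_filter):
    """Build one row image per cluster, filter it, and paste the patches back (Sec. 3.4)."""
    out = np.zeros(out_shape)
    p = patches[0].shape[0]                            # patch side length
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        row = np.hstack([patches[i] for i in idx])     # row image of cluster k
        row_dn = nlm_filter(row)                       # NLM restricted to similar patches
        for n, i in enumerate(idx):                    # paste filtered patches back
            r, c = positions[i]
            out[r:r + p, c:c + p] = row_dn[:, n * p:(n + 1) * p]
    return out
```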
The differences between our approach and NLM are as follows.
1. Gaussian blur provides the pre-processing for pre-classification. The effect is illustrated in Fig. 2. In the original NLM, there is no pre-processing step.
2. K-means clustering on the moment invariants of the blurred noisy image serves as the pre-classification for our filtering process. In the original NLM, all target patches have fixed candidate sets, which are either the whole image or the neighbourhood centred at them. Fig. 1 below shows the block diagram of the proposed algorithm.


Fig. 1: Proposed Method


4. EXPERIMENTAL RESULTS
In our experiments, the image data set is defined as: 1.tif, 2.tif, 3.tif, 4.tif, 5.tif, 6.tif.
For performance evaluation, we compare our proposed method with the original NLM and a recent related method [15] on this dataset. The evaluation metrics adopted in our experiments are the mean square error (MSE) and the peak signal-to-noise ratio (PSNR); PSNR is employed to provide quantitative evaluations of the denoising results. MSE and PSNR are defined as:
$$\mathrm{MSE} = \frac{1}{m \cdot n}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[ I(i,j) - K(i,j) \bigr]^{2} \qquad (9)$$

Where $I(i,j)$ is the original image, $K(i,j)$ is the noisy image, and $m \times n$ is the size of the image. The PSNR value can be calculated from the MSE value as
$$\mathrm{PSNR} = 10 \log_{10}\!\left( \frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}} \right) \qquad (10)$$

Where $\mathrm{MAX}_I$ is the maximum intensity value and MSE is the mean square error.
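For completeness, Eqs. (9) and (10) in NumPy form, assuming 8-bit images so that MAX_I = 255:

```python
import numpy as np

def mse(I, K):
    """Mean square error between two images, Eq. (9)."""
    return np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2)

def psnr(I, K, max_i=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (10)."""
    return 10.0 * np.log10(max_i ** 2 / mse(I, K))
```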
4.1 Parameters of Clustering
We implemented our clustering method based on moment invariants. For standard K-means clustering, several parameters need to be decided: the type of distance we use, the number of clusters we assign, and the length of the vectors we use in our NLM-based framework. Here we exploit the Euclidean distance for measuring the distance between two feature vectors, as in [10]. According to [16], we choose the patch size as 5 x 5. To test how the performance of the method varies with different values of K, we vary K in the range of 400 to 500. The changing trends of PSNR are roughly the same: when K becomes larger, there are more clusters representing different types of details. However, if K goes too high, some clusters will not have enough candidates; as a result, the PSNR goes down after reaching its peak. Therefore, if complexity is not a concern, we can choose the optimal value of K depending on the size of the input noisy image. For our testing set, all the images are 225 x 225, so we choose K = 1800 (with K = 2800 it takes more than twice the time) to guarantee enough candidates for each patch, according to the variation of the visual results when we change K.


Fig. 2: Experimental Results (A) Original Image, (B) Noisy Image, (C) Existing NLM,
(D) Proposed NLM, (E) Gaussian Blur
The difference in visual quality between the two methods can be inspected in the examples shown in Fig. 2. We observe that the proposed method not only preserves details better but also removes severe noise. The method in [15] employs RIBM, but it is applied to neighborhoods, which may cause a lack of proper candidates when the variation of the textures is strong. Our algorithm overcomes this by obtaining sufficient reliable candidates from K-means clustering. We can see that the original NLM is almost ineffective: when the noise level is high, the intensity-based matching between patches is vulnerable to noise. Our scheme adopts Gaussian blur as pre-processing, and the moment invariants are robust against noise interference as well. Our algorithm preserves the main structures much better compared to the other approaches (the original NLM). This demonstrates that using clustering before weighted averaging ensures that most patches get reliable candidates.
5. CONCLUSION
In this paper, we proposed an improved NLM method. It applies moment-invariant-based K-means clustering on the Gaussian blurred image, which provides better classification before weighted averaging. Experimental results show that clustering on moment invariants is very effective for pre-classification. The proposed algorithm can effectively reconstruct finer details and at the same time introduce fewer artifacts than the other methods.
The K-means clustering used in our proposed method is a time-consuming part. In future work, we will investigate more efficient clustering methods to speed up the pre-classification step.
6. REFERENCES
[1] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. 6th Int. Conf. Computer Vision, 1998, pp. 839-846.
[2] L. Shao, H. Zhang, and G. de Haan, "An overview and performance evaluation of classification-based least squares trained filters," IEEE Trans. Image Process., vol. 17, pp. 1772-1782, Oct. 2008.
[3] M. Protter and M. Elad, "Image sequence denoising via sparse and redundant representations," IEEE Trans. Image Process., vol. 18, pp. 27-35, Nov. 2003.
[4] G. Varghese and W. Zhou, "Video denoising based on a spatiotemporal Gaussian scale mixture model," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 7, pp. 1032-1040, Jul. 2010.


[5] F. Luisier, T. Blu, and M. Unser, "SURE-LET for orthonormal wavelet domain video denoising," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 6, pp. 913-919, Jun. 2010.
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process., vol. 16, pp. 2080-2095, Aug. 2007.
[7] A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Model. Simul., vol. 4, pp. 490-530, 2005.
[8] M. Mahmoudi and G. Sapiro, "Fast image and video denoising via nonlocal means of similar neighborhoods," IEEE Signal Process. Lett., vol. 12, pp. 839-842, Dec. 2005.
[9] P. Coupe, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, "An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images," IEEE Trans. Med. Imag., vol. 27, no. 4, pp. 425-441, Apr. 2008.
[10] T. Thaipanich, O. B. Tae, W. Ping-Hao, X. Daru, and C. C. J. Kuo, "Improved image denoising with adaptive nonlocal means (ANL-means) algorithm," IEEE Trans. Consum. Electron., vol. 56, no. 4, pp. 2623-2630, Nov. 2010.
[11] J. Wang, Y. Guo, Y. Ying, Y. Liu, and Q. Peng, "Fast non-local algorithm for image denoising," in Proc. IEEE Int. Conf. Image Process., Atlanta, GA, USA, 2006, pp. 1429-1432.
[12] B. Goossens, H. Luong, A. Pizurica, and W. Philips, "An improved non-local denoising algorithm," Tuusula, Finland, 2008, pp. 143-156.
[13] P. Chao, O. C. Au, D. Jingjing, Y. Wen, and Z. Feng, "A fast NL-means method in image denoising based on the similarity of spatially sampled pixels," in Proc. IEEE Workshop on Multimedia Signal Processing, Rio de Janeiro, Brazil, 2009, pp. 1-4.
[14] D. Tschumperle and L. Brun, "Non-local image smoothing by applying anisotropic diffusion PDEs in the space of patches," in Proc. IEEE Int. Conf. Image Process., Cairo, Egypt, 2009, pp. 2957-2960.
[15] G. Sven, Z. Sebastian, and W. Joachim, "Rotationally invariant similarity measures for nonlocal image denoising," J. Visual Comm. and Image Represent., vol. 22, pp. 117-130, Feb. 2011.

