

Performance Analysis of Weighted Median (WM) Filters in Prediction of Images


Pranam Janney & Guang Deng
Department of Electronic Engineering, La Trobe University, Melbourne, Australia
pranamjanney@yahoo.com

ABSTRACT:
This report analyses weighted median (WM) filters applied to a group of predictors. Tests were performed on several test images. The report also briefly explains why taking the weighted median of a group of predictors is proposed as an alternative and competitive adaptive image prediction method.

1. INTRODUCTION:
Signal processing always provides challenges when it comes to approximating or predicting the next sample, mainly because neighbouring samples can change abruptly, and such abrupt changes are arduous to predict. Linear filters were introduced to counter these challenges, but they are not an optimal class of filters and are often unable to recover the desired signal effectively when the governing distribution of the corrupting noise is other than Gaussian [1]. Weighted median (WM) filters were therefore introduced as a generalisation of standard median filters, in which a non-negative integer weight is assigned to each position in the filter window and the median value is chosen using the samples and their corresponding weights.


2. LITERATURE REVIEW:

2.1 Group of Predictors:


In lossless compression, the algorithm scans the input image matrix row by row, predicting each pixel as a linear combination of previously coded pixels and encoding the prediction error. The JPEG lossless standard uses a total of eight predictors, as listed in Table 1 [2]. Even with these eight predictors, the best predictor for a particular image must still be chosen, so the selection criterion is based on various parameters. Moreover, even the predictor selected from the group in [2] is not optimal; the selected predictor has to be improved further. This leads to an optimisation problem in which the probability of the predictor is maximised:

$P_o = \arg\max_{P_i} \mathrm{Prob}(P_i \mid \{P_0, P_1, \ldots, P_7\}, I)$   (eqn 2.1)
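For concreteness, here is a minimal Python sketch (not from the paper) of the eight predictors of the JPEG lossless standard listed in Table 1 [2]; a, b and c denote the left, above and upper-left neighbours of the current pixel, with integer arithmetic as in the standard:

```python
def jpeg_predictors(a, b, c):
    """The eight JPEG lossless predictors (Table 1 [2]).

    a = left neighbour, b = above neighbour, c = upper-left neighbour.
    Selection value 0 means 'no prediction'."""
    return [
        0,                 # 0: no prediction
        a,                 # 1: left
        b,                 # 2: above
        c,                 # 3: upper-left
        a + b - c,         # 4
        a + (b - c) // 2,  # 5
        b + (a - c) // 2,  # 6
        (a + b) // 2,      # 7
    ]
```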

2.2 Improvement on simple predictors:


Consider X to be the original image and P the prediction output. The prediction error e is then given by e = X − P. The predicted output can be expressed as a combination of the individual predictions:

$P = \sum_i a_i P_i$

The mean square error of the prediction is $L = E[e^2]$.

Table 1. JPEG Predictors for lossless coding [2]

where $E[\cdot]$ denotes the expectation operator. For discrete variables, the above equation reads

$L = \sum_i e_i^2 \, P(e_i)$   (eqn 2.2)

For minimum error, $dL/da_i = 0$. Considering one of the predictors:

$\hat{S}[n] = a_1 S[n-1] + a_2 S[n-2]$   (eqn 2.3)

where $\hat{S}[n]$ is the predicted value of the pixel $S[n]$. With $L = E[(S[n] - \hat{S}[n])^2]$ and setting $dL/da_i = 0$, we have

$E(S[n]\,S[n-1]) = a_1\,E(S[n-1]^2) + a_2\,E(S[n-2]\,S[n-1])$   (eqn 2.4)

For the predictor coefficients, let R(k) denote the correlation between samples separated by lag k. Then for i = 1:

$R(1) = a_1 R(0) + a_2 R(1)$

and for i = 2:

$R(2) = a_1 R(1) + a_2 R(0)$

Representing the correlation coefficients in matrix form,

$\begin{pmatrix} R(0) & R(1) \\ R(1) & R(0) \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} R(1) \\ R(2) \end{pmatrix}$   (eqn 2.5)

or R a = r, where R, a and r denote the three matrices above. The equation generalises to any n > 2, with

$R = \begin{pmatrix} R(0) & R(1) & \cdots & R(n-1) \\ R(1) & R(0) & \cdots & R(n-2) \\ \vdots & & \ddots & \vdots \\ R(n-1) & R(n-2) & \cdots & R(0) \end{pmatrix}$

The matrix R is in symmetric Toeplitz form. Assuming a Markov model, we have

$R(k) = \sigma^2 \rho^{|k|}$   (eqn 2.6)

where $\sigma$ and $\rho$ are constants. The optimisation problem (eqn 2.1) then reduces to

$\max \; \mathrm{Prob}(\sigma^2, \rho \mid \{S[n]\}, I)$   (eqn 2.7)
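As an illustrative sketch (not from the paper; numpy is assumed), the normal equations R a = r of eqn 2.5 can be solved directly, with the Markov model of eqn 2.6 supplying the autocorrelation values; sigma2 and rho below are arbitrary example constants:

```python
import numpy as np

def ar_coefficients(r):
    """Solve R a = r (eqn 2.5) for the prediction coefficients,
    given the autocorrelations r = [R(0), R(1), ..., R(n)]."""
    n = len(r) - 1
    # Symmetric Toeplitz matrix: R[i, j] = R(|i - j|)
    R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    return np.linalg.solve(R, np.array(r[1:]))

# Markov model of eqn 2.6: R(k) = sigma^2 * rho^|k|
sigma2, rho = 1.0, 0.95
r = [sigma2 * rho ** k for k in range(3)]
a1, a2 = ar_coefficients(r)   # coefficients of eqn 2.3
```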

From equation (eqn 2.7) above, we can see that the probability is a function of the constants, which means the predictor can be optimised by varying the constants; these constants are henceforth called the weights. For a Laplacian model we have [3]:

$\mathrm{Prob}(x_k \mid \mu, I) = \frac{1}{2\lambda_k} \, e^{-|x_k - \mu|/\lambda_k}$   (eqn 2.8)

where $\mu$ is the parameter of interest and $2\lambda_k^2$ is the variance. The optimised prediction is then given by the weighted median filter output of the predictions, using $1/\lambda_k$ as the weight [4]. Thus the parameter of interest can be represented as

$\mu = \mathrm{WM}(\{P_i, \omega_i\}_{i=1:N})$   (eqn 2.9)

In our case, the parameter of interest is the optimised predictor, so we have

$\hat{P} = \mathrm{WM}(\{P_i, \omega_i\}_{i=1:N})$   (eqn 2.10)

Now, considering a simple case of the Laplacian distribution with equal scale parameters,

$\lambda_i^2 = \lambda_j^2 \quad (\text{if } i \neq j)$   (eqn 2.11)

the weighted median reduces to a simple median, and the optimised prediction solution reduces to

$\hat{P} = \mathrm{Median}(\{P_i\})$   (eqn 2.12)

3. WEIGHTED MEDIAN FILTERS:


The median is the maximum likelihood estimate of the signal level in the presence of uncorrelated, additive, biexponentially distributed noise [5]. Weighted median filters belong to the broader class of stack filters. In the binary domain, WM filters are self-dual, linearly separable, positive Boolean functions [5].

3.1 Definition:
3.11 Positive integer weights: For the discrete-time, continuous-valued input vector X = [X_1, X_2, ..., X_N], the output Y of the WM filter of width N with corresponding integer weights W = [W_1, W_2, ..., W_N] is given by the filtering procedure [5]:

Y = MED[W_1 ◊ X_1, W_2 ◊ X_2, ..., W_N ◊ X_N]   (eqn 3.1)

where MED is the median operation and ◊ denotes duplication (W ◊ X stands for the sample X repeated W times). The median value is chosen from the sequence formed by the samples duplicated according to their corresponding weights.

3.12 Positive non-integer weights: The weighted median of X is the value β minimizing the expression [5]

$L(\beta) = \sum_{i=1}^{N} W_i \, |X_i - \beta|$   (eqn 3.2)

3.13 Filtering procedure: Sort the samples inside the filter window; add up the corresponding weights from the upper end of the sorted set until the sum just exceeds half of the total sum of weights, i.e. $\frac{1}{2}\sum_{i=1}^{N} W_i$; the output of the WM filter is the sample corresponding to the last added weight.
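The procedure in 3.13 translates directly into code. The following is a minimal Python sketch (not from the paper; numpy is assumed); the final line also illustrates eqn 2.12, where equal weights reduce the weighted median to the ordinary median:

```python
import numpy as np

def weighted_median(x, w):
    """WM filter output for window samples x with positive weights w
    (section 3.13): sort the samples, then accumulate the weights from
    the upper end until the sum just exceeds half the total weight."""
    order = np.argsort(x)[::-1]      # indices of samples, largest first
    half = 0.5 * np.sum(w)
    acc = 0.0
    for idx in order:
        acc += w[idx]
        if acc > half:               # sum just exceeds half the total
            return x[idx]
    return x[order[-1]]              # not reached for strictly positive w

# Equal weights reduce the WM to the ordinary median (eqn 2.12):
print(weighted_median(np.array([4.0, 9.0, 5.0]),
                      np.array([1.0, 1.0, 1.0])))   # prints 5.0
```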

4. APPLICATIONS:

4.1 Prediction and Filtering:


Median filtering was performed on various images using different weight assignments, with the analysis carried out in MATLAB (ver. 6.1). The entropy of the prediction errors and the signal-to-noise ratio of the predicted image were used as performance measures. The signal-to-noise ratio (dB) was calculated as

$SNR = 10 \log_{10} \left( \sum_{i,j} x_{ij}^2 \,\Big/\, \sum_{i,j} (x_{ij} - P_{ij})^2 \right)$   (eqn 4.1)

where $x_{ij}$ is the original pixel value and $P_{ij}$ is the predicted pixel value. The signal-to-noise ratio is calculated for the predicted image with respect to the original image. The normalized histogram of an image can be defined by

$n = \frac{1}{N}\,[m_{pv}(1), m_{pv}(2), \ldots, m_{pv}(N_p)], \quad m_{pv} \ge 0$   (eqn 4.2)

where n is the histogram of the image, $m_{pv}$ is the number of image pixels with pixel value pv, and N is the total number of image pixels. The entropy was then calculated using

$E = -\sum_{i=1}^{N_p} n_i \log_2 n_i$   (eqn 4.3)

where E is the entropy of the image and $N_p$ is the total number of histogram values.
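Both measures admit a short sketch. Below is a minimal Python version (not from the paper; numpy is assumed) of eqns 4.1-4.3, computing the SNR of a predicted image and the entropy of the prediction errors from their normalized histogram:

```python
import numpy as np

def snr_db(x, p):
    """Signal-to-noise ratio (dB) of eqn 4.1; x = original image,
    p = predicted image (float arrays of equal shape)."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - p) ** 2))

def entropy_bits(e):
    """Entropy (eqn 4.3), in bits, of the values in e, computed from
    the normalized histogram of eqn 4.2."""
    _, counts = np.unique(e, return_counts=True)
    n = counts / e.size              # normalized histogram, sums to 1
    return -np.sum(n * np.log2(n))
```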

Table 2. Entropy of the prediction errors

Image     1/(Ent)²   1/(Var)²   1/N      1/rand   1/SSE
Rice      3.8195     3.7379     3.9991   4.0572   3.9996
Saturn    2.4249     2.3324     2.5422   2.5054   2.1702
Tire      4.6929     4.566      4.8552   4.9952   4.495
Circuit   4.2943     3.9762     4.6763   4.8408   3.927

The weights in each case were assigned using different parameters. Three kinds of weights were used: global weights (entropy (Ent), variance (Var), random (rand)), the simple median obtained by weighting all N predictors equally (1/N), and local weights (sum of squared errors, SSE). Weights computed over the whole image are called global weights; the SSE weights are called local because the SSE is computed independently for each pixel (see the sketch below). Experimental tests were conducted on various images and the results obtained are shown in Table 2 and Table 3. Table 2 shows the entropy of the prediction errors for different images. The entropy of the prediction error is lowest when the weights are assigned using the variance parameter. Random weights gave good results for some of the images, but they cannot be relied upon: the process generating them is random, so the probability of obtaining the best result is very low. Localised weights also gave better results where the image was large (e.g. Saturn.tif, 328 x 438). Thus, using the variance as weights for a medium-sized image, or localised weights for a large image, minimises the prediction-error entropy, indicating the best possible prediction.
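The following hypothetical sketch shows how such weights might be formed; the exact quantities used in the experiments (for example, what the variance, entropy and SSE are computed over) are not spelled out in the text, so every definition here is an assumption. `err` is taken to be a list of per-predictor error images, and entropy_bits is the helper from the previous sketch:

```python
import numpy as np

def global_weights(err, kind="var"):
    """One weight per predictor, computed over the whole error image."""
    if kind == "var":                    # 1/(Var)^2
        return np.array([1.0 / np.var(e) ** 2 for e in err])
    if kind == "ent":                    # 1/(Ent)^2
        return np.array([1.0 / entropy_bits(e) ** 2 for e in err])
    if kind == "rand":                   # 1/rand
        return 1.0 / np.random.rand(len(err))
    return np.full(len(err), 1.0 / len(err))   # 1/N: plain median

def local_sse_weight(err_k, i, j, r=1):
    """Per-pixel (local) weight for one predictor: 1/SSE over a small
    neighbourhood of radius r around pixel (i, j)."""
    patch = err_k[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
    return 1.0 / np.sum(patch ** 2)
```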

Table 3. Signal-to-noise ratio (dB) of the predicted image for different images

Image     1/(Ent)²   1/(Var)²   1/N       1/rand    1/SSE
Rice      20.3616    20.9825    19.5056   18.9662   19.6206
Saturn    26.1712    27.3062    24.5698   21.9564   28.3209
Tire      17.4974    18.2163    15.5827   15.4580   18.6114
Circuit   20.2415    20.9169    18.4834   17.7468   20.9958

From Table 3, we can see that the signal-to-noise ratio is consistently high for predicted images obtained using variance-based weights, especially for medium-sized images. Here again, localised weights gave better results where the image is large (e.g. Saturn, 328 x 438). A higher signal-to-noise ratio indicates that, even though the prediction process introduces noise, the noise rejection capacity is high; thus using variance-based weights over the whole image, or localised weights for a large image, results in better noise rejection than the other weighting schemes.

5. CONCLUSION:
From the experimental results in Table 2 and Table 3, we can see that a suitable choice of weights for prediction results in lower prediction errors and hence a better signal-to-noise ratio with respect to the original image.

In conclusion, the weighted median of a group of predictors can serve as an alternative and competitive method for adaptive image prediction. Our experiments suggest that variance-based global weights yield the best weighted median filter, with better prediction and noise rejection capacity, especially for medium-sized images; when the image is large, localised weights appear to perform better than variance-based weights.

6. REFERENCES:
[1] L. Yin, Y. Neuvo, "Fast adaptation and performance characteristics of FIR-WOS hybrid filters", IEEE Trans. Signal Processing, vol. 42, no. 7, pp. 1610-1628, July 1994.
[2] R. Ansari, N. Memon, "The JPEG Lossless Standards", http://isis.poly.edu/memon/pdf/7.pdf
[3] D. S. Sivia, Data Analysis: A Bayesian Tutorial, Clarendon Press, Oxford, 1996.
[4] G. Deng, H. Ye, "Maximum likelihood based framework for second-level adaptive prediction", IEE Proc. - Vis. Image Signal Process., vol. 150, no. 3, pp. 193-197, June 2003.
[5] L. Yin et al., "Weighted median filters: a tutorial", IEEE Trans. Circuits and Systems II, vol. 43, no. 3, pp. 157-192, March 1996.
