ABSTRACT:
This report analyses weighted median filters applied to a group of predictors. The tests were performed on different test images. The report also presents a brief explanation for choosing the proposed method, taking the weighted median of a group of predictors, as an alternative and competitive adaptive image prediction method.
1. INTRODUCTION:
Signal processing always poses challenges when it comes to approximating or predicting the next sample, chiefly because signals contain abrupt changes from sample to sample, and these abrupt changes are arduous to predict. Linear filters were introduced to counter these challenges, but linear filters are not an optimal class of filters and are often unable to recover the desired signal effectively when the governing distribution of the corrupting noise samples is other than Gaussian [1]. Weighted median (WM) filters were therefore introduced as a generalisation of standard median filters, in which a non-negative integer weight is assigned to each position in the filter window and the median value is chosen using the samples and their corresponding weights.
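The advantage of a median over a linear (mean) filter under non-Gaussian, impulsive noise can be seen in a toy numerical sketch; the signal and window size below are illustrative, not from the paper.

```python
# Toy comparison (not from the paper): a running mean vs. a running
# median on a constant signal corrupted by a single impulse.
def mean_filter(x, w=3):
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def median_filter(x, w=3):
    half = w // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

signal = [10, 10, 10, 200, 10, 10, 10]   # one impulse at index 3
print(mean_filter(signal))    # impulse smeared over neighbouring samples
print(median_filter(signal))  # impulse rejected entirely
```

The mean filter spreads the impulse over three output samples, while the median filter removes it without disturbing the rest of the signal, which is the behaviour the introduction describes for non-Gaussian corruption.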
Analysis of WM Filters
The prediction is formed as a combination of predictors:

P̂ = Σ_i a_i P_i    (eqn 2.1)

The mean square error of the prediction is: L = E[e²]
Table 1. JPEG Predictors for lossless coding [2]
where E[·] denotes expectation. For discrete variables, the above equation can be written as L = Σ_i e_i² P(e_i), and for minimum error, dL/da_i = 0. Consider one of the predictors:

Ŝ[n] = a1 S[n-1] + a2 S[n-2],   L = (S[n] - Ŝ[n])²    (eqn 2.2)

Setting dL/da_i = 0 gives

E(S[n]·S[n-1]) = a1 E(S[n-1]²) + a2 E(S[n-1]·S[n-2])    (eqn 2.3)
If R(x, y) represents the correlation between two variables x and y, writing R(k) for the correlation at lag k, the predictor coefficients satisfy:

i = 1:  R(1) = a1 R(0) + a2 R(1)
i = 2:  R(2) = a1 R(1) + a2 R(0)    (eqn 2.4)

Representing the correlation coefficients in matrix form,

[ R(0)  R(1) ] [ a1 ]   [ R(1) ]
[ R(1)  R(0) ] [ a2 ] = [ R(2) ]

or R a = r. The equation can also be generalised for any n > 2:

    [ R(0)    R(1)    ...  R(n-1) ]
R = [ R(1)    R(0)    ...  R(n-2) ]    (eqn 2.5)
    [  :        :      :      :   ]
    [ R(n-1)  R(n-2)  ...  R(0)  ]

where R, a and r represent the above three matrices respectively. The matrix R is in symmetric Toeplitz form. Assuming the Markov model, the correlation reduces to

R(k) = σ² ρ^|k|    (eqn 2.6)

where σ and ρ are constants. The optimisation problem then reduces to:

max Prob(a1, a2 | {S[n]}, I)    (eqn 2.7)
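The 2×2 Yule-Walker system R a = r can be solved in closed form; the sketch below does so by Cramer's rule under the Markov correlation model (here normalised to R(k) = ρ^|k|). The function name and the choice ρ = 0.9 are illustrative.

```python
# Hedged sketch: solving the 2x2 system R a = r (eqn 2.5) by Cramer's
# rule, under the Markov correlation model R(k) = rho**abs(k) (eqn 2.6,
# normalised so that R(0) = 1).
def predictor_coefficients(rho):
    # R = [[R(0), R(1)], [R(1), R(0)]],  r = [R(1), R(2)]
    R00, R01 = 1.0, rho
    r1, r2 = rho, rho ** 2
    det = R00 * R00 - R01 * R01          # determinant of R
    a1 = (r1 * R00 - r2 * R01) / det     # Cramer's rule, first column
    a2 = (R00 * r2 - R01 * r1) / det     # Cramer's rule, second column
    return a1, a2

a1, a2 = predictor_coefficients(0.9)
print(a1, a2)
```

Working the algebra through, a1 = ρ(1 − ρ²)/(1 − ρ²) = ρ and a2 = 0: under a first-order Markov model the second predictor coefficient vanishes, so the optimal linear predictor uses only the previous sample.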
From equation (eqn 2.7), we can see that the probability is a function of the constants, which means that by varying the constants we can optimise the predictor; the constants are henceforth called the weights. For a Laplacian model we have [3]:

Prob(x_k | σ_k, I) = (1 / (2 σ_k)) e^(−|x_k| / σ_k)    (eqn 2.8)

In our case, the parameter of interest is the optimised predictor. The optimised prediction is then given by the weighted median filter output of the predictions, using the weights ω_k = 1/σ_k [4]:

P̂ = WM({P_i, ω_i}, i = 1:N)    (eqn 2.9)

Now consider the simple case of a Laplacian distribution with equal scale parameters, σ_i = σ_j for i ≠ j (eqns 2.10, 2.11). The weights are then all equal, so the weighted median reduces to a simple median, and the optimised prediction solution reduces to:

P̂ = Median({P_i})    (eqn 2.12)
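The reduction of the weighted median to a plain median under equal weights can be checked numerically. The sketch below uses the cost-function definition of the weighted median (the sample value minimising Σ_i w_i |x_i − β|); the sample values are illustrative.

```python
# Hedged sketch: the weighted median as the sample value minimising
# sum_i w_i * |x_i - beta|.  With equal weights it coincides with the
# plain median, as in eqn 2.12.
def weighted_median(samples, weights):
    cost = lambda beta: sum(w * abs(x - beta) for x, w in zip(samples, weights))
    return min(samples, key=cost)

preds = [12, 15, 11, 40, 13]             # illustrative predictor outputs
print(weighted_median(preds, [1, 1, 1, 1, 1]))   # 13, the plain median
print(weighted_median(preds, [1, 1, 1, 5, 1]))   # heavy weight pulls output to 40
```

With equal weights the outlying prediction 40 is ignored, exactly as a plain median would; a large weight (low σ_k, i.e. a trusted predictor) pulls the output toward that predictor.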
In the binary domain, WM filters are self-dual, linearly separable positive Boolean functions [5].
3.1 Definition:
3.11 Positive integer weights: For the discrete-time, continuous-valued input vector X = [X1, X2, X3, ..., XN], the output Y of the WM filter of width N with corresponding integer weights W = [W1, W2, W3, ..., WN] is given by the filtering procedure [5]:

Y = MED[W1 * X1, W2 * X2, ..., WN * XN]    (eqn 3.1)

where MED is the median operation and Wi * Xi denotes replication, i.e. Xi repeated Wi times; the median value is chosen from the expanded sequence of samples.

3.12 Positive non-integer weights: The weighted median of X is the value β minimising the expression [5]

L(β) = Σ_{i=1}^{N} Wi |Xi − β|    (eqn 3.2)

3.13 Filtering procedure: Sort the samples inside the filter window, then add up the corresponding weights from the upper end of the sorted set until the running sum just exceeds half of the total sum of weights, (1/2) Σ_{i=1}^{N} Wi; the sample at which this occurs is the output of the WM filter.
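The filtering procedure of 3.13 can be sketched directly: sort the window, accumulate weights from the upper end, and stop once the running sum exceeds half the total weight. The window contents below are illustrative.

```python
# Hedged sketch of the 3.13 filtering procedure: sort the samples,
# accumulate the corresponding weights from the upper end of the sorted
# set, and output the sample at which the running sum first exceeds
# half the total sum of weights.
def wm_filter_output(samples, weights):
    total = sum(weights)
    pairs = sorted(zip(samples, weights), reverse=True)  # upper end first
    running = 0.0
    for value, weight in pairs:
        running += weight
        if running > total / 2:
            return value

window = [3, 9, 4, 7, 5]
print(wm_filter_output(window, [1, 1, 1, 1, 1]))  # equal weights: plain median, 5
print(wm_filter_output(window, [1, 5, 1, 1, 1]))  # heavy weight on 9: output 9
```

This runs in O(N log N) per window from the sort, and with all weights equal to one it reproduces the standard median, consistent with the reduction in eqn 2.12.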
The signal to noise ratio is calculated for the predicted image with respect to the original image:

SNR = 10 log10( Σ_ij x_ij² / Σ_ij (x_ij − P_ij)² )    (eqn 4.1)

where x_ij is the original pixel value and P_ij is the predicted pixel value. The histogram of an image can be defined by

n_pv = m_pv / N    (eqn 4.2)

where n_pv is the normalised histogram of the image, m_pv is the number of image pixels with pixel value pv, and N is the total number of image pixels. The entropy was then calculated using:

E = − Σ_{i=1}^{Np} n_i log2(n_i)    (eqn 4.3)

where log2(x) = ln(x) / ln(2) and Np is the number of distinct pixel values.
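The histogram and entropy measures of eqns 4.2 and 4.3 can be sketched in a few lines; the two tiny "images" below are illustrative flattened pixel lists, not data from the paper.

```python
# Hedged sketch of eqns 4.2 and 4.3: normalised histogram n_pv = m_pv / N
# followed by entropy E = -sum_i n_i * log2(n_i) over occurring values.
import math
from collections import Counter

def entropy(pixels):
    N = len(pixels)
    counts = Counter(pixels)                          # m_pv per pixel value
    hist = {pv: m / N for pv, m in counts.items()}    # eqn 4.2
    return -sum(n * math.log(n, 2) for n in hist.values())  # eqn 4.3

flat = [5, 5, 5, 5]    # constant image: zero entropy
mixed = [0, 1, 0, 1]   # two equiprobable values: 1 bit
print(entropy(flat), entropy(mixed))
```

A good predictor concentrates the prediction-error histogram near zero, driving this entropy down, which is why Table 2 reports the entropy of the prediction errors as the comparison metric.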
Table 2. Entropy of the prediction errors
The weights in each case were assigned using different parameters. Two kinds of weights were assigned: global weights (entropy (Ent), variance (Var), random (rand)) and local weights (sum of squared errors, SSE), together with a simple median using the number of predictors (N). Weights are called global when they are assigned considering the whole image; SSE-based weights are called local because the SSE is computed independently for each pixel. Experimental tests were conducted on various images and the results obtained are shown in Table 2 and Table 3. Table 2 shows the entropy of the prediction errors for different images. The entropy of the prediction error is lowest when the weights are assigned using the variance parameter. Random weights gave good results for some of the images, but they cannot be taken into consideration because the methodology for obtaining them is a random process and the probability of getting the best results is very low. Localised weights also gave better results in cases where the image was large (e.g. Saturn.tif, 328 x 438). Thus, by using the variance as the weights for a medium-sized image, or localised weights for a large image, the prediction error entropy decreases, denoting that it is the best possible prediction.

Table 3. Signal to noise ratio (dB) of the predicted image for different images.

From Table 3, we can see that the signal to noise ratio is consistently high for images predicted using variance-assigned weights, especially for medium-sized images. Here again, the use of localised weights has shown better results in cases where the image is very large (e.g. Saturn, 328 x 438). A higher signal to noise ratio indicates that, even though the prediction process adds noise, the noise rejection capacity is high; thus using variance as weights over the whole image, or localised weights for a large image, results in better noise rejection than the other choices.
5. CONCLUSION:
From the experimental results (Tables 2 and 3), we can see that using particular weights for prediction results in lower prediction errors and hence a better signal to noise ratio of the predicted image with respect to the original image.
In conclusion, the weighted median of a group of predictors can be an alternative and competitive technique for adaptive image prediction. Our experiments suggest that variance should be used as the global weights to attain the best possible weighted median filter, with better prediction and noise rejection capacity, especially for medium-sized images; in cases where the image is large, localised weights perform better than variance.
6. REFERENCES:
[1] L. Yin, Y. Neuvo, "Fast adaptation and performance characteristics of FIR-WOS hybrid filters," IEEE Trans. Signal Processing, vol. 42, no. 7, pp. 1610-1628, July 1994.
[2] R. Ansari, N. Memon, "The JPEG lossless standards," http://isis.poly.edu/memon/pdf/7.pdf
[3] D. S. Sivia, Data Analysis: A Bayesian Tutorial, Clarendon Press, Oxford, 1996.
[4] G. Deng, H. Ye, "Maximum likelihood based framework for second-level adaptive prediction," IEE Proc. Vis. Image Signal Process., vol. 150, no. 3, pp. 193-197, June 2003.
[5] L. Yin et al., "Weighted median filters: a tutorial," IEEE Trans. Circuits and Systems II, vol. 43, no. 3, pp. 157-192, March 1996.