PSNR Improvement Using Different Prediction Coding in Image Compression

Differential Pulse Code Modulation (DPCM) is one of the predictive coding techniques. The number of previous pixels employed in the estimation is referred to as the order of the predictor: a predictor using one pixel for estimation is called a "first-order predictor", a "second-order predictor" utilizes two pixels, and an "nth-order predictor" employs n previous pixels. In this work, the prediction mean square error (MSE) was computed using different numbers of previous picture elements. The results show that the MSE decreases significantly as up to three pixels are used, while further decreases in MSE are rather small when more than three pixels are used. Correspondingly, the Peak Signal to Noise Ratio (PSNR) increases significantly with increasing predictor order, but the performance improvement becomes negligible beyond a third-order predictor, where only a marginal gain can be achieved.


Introduction
The basic idea behind linear prediction (LP) is that a sample of a signal can be predicted as a linear combination of previous samples. By minimizing the sum of the squared differences between input samples and linearly predicted ones, a unique set of predictor coefficients can be determined [1,2]. The DPCM compression method is a member of the family of differential encoding compression methods. It is based on the well-known fact that neighboring pixels in an image are correlated, so their differences are small. In predictive coding, the difference signal between the actual sample and its predicted value is quantized and transmitted. This technique has been used in speech coding, image coding, and in the biomedical field [3].
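The least-squares determination of the predictor coefficients described above can be sketched as follows. This is a minimal illustration assuming a 1-D signal stored in a NumPy array; the function name `lp_coefficients` is hypothetical, not from the cited references:

```python
import numpy as np

def lp_coefficients(x, order):
    """Find a_1..a_p minimizing sum over n of
    (x[n] - sum_k a_k * x[n-k])**2, via least squares."""
    # Column k holds the signal delayed by k samples (the "previous samples").
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    y = x[order:]                          # the samples to be predicted
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

For a signal that exactly follows x[n] = 0.9 x[n-1], a first-order fit recovers the coefficient 0.9.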
The main stages of a DPCM system for a still image are illustrated in figures (6, 7 and 8). The current pixel f(n) is predicted as a weighted sum of previous pixels:

f^(n) = a1 f(n-1) + a2 f(n-2) + ... + aN f(n-N)   (1-1)

The difference signal is then e(n) = f(n) - f^(n), and this difference is quantized to eq(n). The coefficients a1, ..., aN are called the prediction weighting coefficients. The set of predictor coefficients may be fixed for all images (global prediction), or may vary from image to image (local prediction). The sum of the prediction coefficients in equation (1-1) is normally required to be less than or equal to one; this restriction ensures that the predictor's output falls within the allowed range of gray levels. Possible predictors which provide satisfactory performance over a wide range of actual images are given below:

(a) first-order predictor: one previous pixel, with a1 normally chosen equal to 0.95;
(b) second-order predictor: two previous pixels, with a1 and a2 normally chosen equal to 0.5;
(c) third-order predictor: three previous pixels [4].

A 3-bit scalar quantizer has been employed, whose normalized level distribution is given in table (1). The actual levels are calculated by multiplying the normalized levels by the standard deviation of the quantizer's input signal.
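The first- and second-order predictors above can be sketched in code. This assumes an 8-bit grayscale image in a 2-D NumPy array scanned in raster order; the choice of the left and upper neighbours for the second-order predictor is an assumption, since Fig. 5 is not reproduced here:

```python
import numpy as np

def predict(img, order):
    """Predict each pixel from previously scanned neighbours.
    Coefficients follow the text: a1 = 0.95 (first order),
    a1 = a2 = 0.5 (second order)."""
    img = img.astype(float)
    pred = np.zeros_like(img)
    if order == 1:
        pred[:, 1:] = 0.95 * img[:, :-1]                        # left neighbour
    elif order == 2:
        pred[1:, 1:] = 0.5 * img[1:, :-1] + 0.5 * img[:-1, 1:]  # left + above
    return pred

def difference_signal(img, order):
    """Difference between actual pixels and their predicted values."""
    return img.astype(float) - predict(img, order)
```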
The decoder makes the same prediction, from previous decoded samples, to which the received difference signal is added to regenerate the present sample value, and so on.
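The encoder/decoder symmetry can be sketched for a 1-D signal. A uniform quantizer is used here purely to keep the example short; the paper itself employs the nonuniform quantizer of table (1):

```python
import numpy as np

def dpcm_roundtrip(signal, a=0.95, step=4.0):
    """First-order DPCM: both sides predict from *reconstructed* samples,
    so the decoder can track the encoder exactly."""
    recon = np.zeros(len(signal), dtype=float)
    recon[0] = signal[0]                       # first sample sent unpredicted
    for n in range(1, len(signal)):
        pred = a * recon[n - 1]                # same prediction at both ends
        diff = signal[n] - pred                # difference signal
        qdiff = step * np.round(diff / step)   # quantized difference
        recon[n] = pred + qdiff                # decoder: prediction + difference
    return recon
```

Because each quantized difference deviates from the true difference by at most step/2, the per-sample reconstruction error is bounded by step/2 and does not accumulate.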
Normally the dynamic range of the difference signal is much smaller than the dynamic range of the input signal. Since the difference signal amplitude is modeled by a Laplacian density function, a nonuniform quantizer matched to the difference signal statistics is employed.

Quantization
Quantization is the process of approximating a continuous-amplitude signal by a discrete-amplitude signal while minimizing a given distortion measure. Unlike sampling, quantization is an intrinsically lossy process; after quantization the original signal cannot be recovered without error. In a quantized signal, each sample can be represented by an index of a value selected from a finite set. The quantizer reduces the number of bits needed to store the image data by reducing the precision of those values.
There are two types of quantization: scalar quantization and vector quantization. In scalar quantization, each input symbol is treated separately in producing the output, while in vector quantization the input symbols are grouped together into blocks called vectors and processed jointly to give the output. Grouping the data and treating it as a single unit improves the optimality of the vector quantizer, but at the cost of increased computational complexity. In the present work, scalar quantization is applied [5,6].

Scalar Quantization (SQ)
Scalar quantization is the process of mapping one signal sample at a time. This type of quantization is the simplest and most popular, and it is conceptually of great importance. Scalar quantization is an example of a lossy compression method in which it is easy to control the trade-off between compression ratio and the amount of loss. However, because it is so simple, its use is limited to cases where much loss can be tolerated. Scalar quantization can be used to compress images, but its performance is mediocre.
There are two types of quantizers: uniform and nonuniform. In a uniform quantizer the dynamic range of the signal is divided into equally spaced intervals, whereas in a nonuniform quantizer the dynamic range is divided into unequally spaced intervals.
Uniform quantizers perform optimally for signals with a uniform distribution. A nonuniform quantizer is designed for a non-uniform distribution. It is more complex, but the added complexity is often worthwhile because it can reduce the perceptual effects of quantization. Nonuniform quantizers typically produce digital signals with a higher average SNR.
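A midpoint uniform quantizer over a fixed range can be sketched as follows; the function name and the range handling are illustrative assumptions:

```python
import numpy as np

def uniform_quantize(x, xmin, xmax, bits):
    """Divide [xmin, xmax] into 2**bits equally spaced intervals and
    map each sample to the midpoint of its interval."""
    levels = 2 ** bits
    step = (xmax - xmin) / levels
    idx = np.clip(np.floor((np.asarray(x, dtype=float) - xmin) / step),
                  0, levels - 1)
    return xmin + (idx + 0.5) * step
```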
In scalar quantization, the value of an image pixel (f) is compared to a set of decision levels. If the pixel value f falls between two adjacent decision levels (d_i, d_i+1), it is quantized to a fixed reconstruction level (r_i) lying in the quantization band.
The quantization problem entails specifying a set of decision levels d_i and a set of reconstruction levels r_i such that if d_i <= f < d_i+1, then the pixel value f is quantized to the reconstruction level r_i. Fig. 3 illustrates the decision and reconstruction levels. Table (1) shows the optimum quantizer parameters for a Laplacian density [7,8].
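The decision/reconstruction rule above can be sketched in code. The 2-bit levels below are made-up illustrative numbers, not the values of table (1):

```python
import numpy as np

def scalar_quantize(f, decision, recon):
    """If decision[i] <= f < decision[i+1], quantize f to recon[i].
    `decision` has one more entry than `recon`."""
    idx = np.digitize(f, decision[1:-1])   # which band each sample falls in
    return np.asarray(recon)[idx]

# Illustrative 2-bit nonuniform quantizer (NOT the table (1) values):
decision = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])
recon = np.array([-1.7, -0.5, 0.5, 1.7])
```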

Results and Discussion
An important concept here is measuring the average information in an image, referred to as the entropy. The entropy of an N*N image can be calculated as [9]:

H = - sum over i from 0 to L-1 of P_i log2(P_i)

where P_i = n_i / N^2 is the probability of the i-th gray level, n_i is the total number of pixels with gray value i, and L is the total number of gray levels (e.g., 256 for 8 bits). The images used have different information content, as shown in Fig. 4. Subjectively, one can easily recognize that the image Trees has one of the lowest information contents, since it contains almost entirely smooth areas. The image Moon has moderate information content because it contains some smooth areas and some texture, and the image Board has high information content. To support this statement, the entropy of each image was calculated and is listed in table (2).
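The entropy formula above can be sketched in code, assuming an integer-valued grayscale image:

```python
import numpy as np

def entropy(img, levels=256):
    """H = -sum_i P_i * log2(P_i), with P_i = n_i / N**2 (bits per pixel)."""
    counts = np.bincount(np.asarray(img).ravel(), minlength=levels)
    p = counts / counts.sum()
    p = p[p > 0]                  # terms with P_i = 0 contribute nothing
    return float(-np.sum(p * np.log2(p)))
```

A perfectly flat image has entropy 0 bits/pixel; an image whose pixels are split evenly between two gray levels has entropy 1 bit/pixel.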
A DPCM system with first-, second- and third-order predictors has been employed to calculate the prediction difference signal of the Board image. The pixels used for prediction and the values of the prediction coefficients were chosen according to equations (1-1, a, b, c) and Fig. 5. A 3-bit nonuniform quantizer with the normalized level distribution given in table (1) was used. The actual level distribution was obtained by multiplying the normalized levels by the standard deviation of the difference signal. Fig. 5 shows the prediction difference signal for the 1st-, 2nd- and 3rd-order predictors respectively. From the figure, one can recognize that the 3rd-order predictor gives the smallest difference signal. The PSNR values listed in table (3) support this claim.
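The PSNR measure used in table (3) can be computed as follows, taking peak = 255 for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB: 10*log10(peak**2 / MSE)."""
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A smaller quantized-difference error (MSE) directly yields a larger PSNR, which is why the third-order predictor scores highest.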

Conclusions
The philosophy underlying predictive coding is to use prediction to remove inter-pixel redundancy and encode only the new information.
In a general DPCM system (see figures 6, 7 and 8), a pixel's gray level is first predicted from the preceding reconstructed pixels' gray-level values. The difference between the pixel's gray-level value and the predicted value is then quantized. Finally, the quantized difference is encoded and transmitted to the receiver.
From the results obtained, one can draw two basic conclusions:
1. The Peak Signal to Noise Ratio (PSNR) increases significantly with increasing predictor order, but the performance improvement becomes negligible beyond a third-order predictor.
2. The actual efficiency of the compression system depends to some extent on the original image quality.

Table (1): Optimum quantizer parameters for Laplacian density signals, for 1-, 2-, 3-, 4- and 5-bit quantizers [8].