
A Novel Image Compression Algorithm Based on Sixteen Most Lately Used Encoded Codewords

Lossy image compression techniques such as vector-quantization-based methods have the advantages of simple implementation and a high compression ratio. Kim further improved the compression rate with a technique called side-match vector quantization (SMVQ). Due to the derailment problem, the side-match vector quantization method needs an extra indicator to proceed properly with the compression operation. The search-order coding method is an improved compression method based on vector quantization that takes the detailed characteristics of the image into consideration, performing lossless compression on the VQ index table. In this article, we propose an image compression method based on the sixteen most recently encoded codewords. Experimental results show that our method can indeed significantly improve the compression rate compared with some previously proposed methods.
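To make the idea concrete, here is a minimal sketch of a recently-used-codeword cache of the kind this abstract describes. The flag bit, the 4-bit cache position, and the eviction policy are our assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' exact algorithm): encode a VQ index
# stream with a cache of the 16 most recently used codeword indices.
# A cache hit costs 1 flag bit + 4 bits; a miss costs 1 flag bit + the
# full codebook_bits-bit index.

def encode_indices(indices, codebook_bits=8, cache_size=16):
    cache = []            # most recently used indices, newest first
    bits = []             # output bit-string fragments
    for idx in indices:
        if idx in cache:
            pos = cache.index(idx)
            bits.append('1' + format(pos, '04b'))               # hit: 5 bits
            cache.remove(idx)
        else:
            bits.append('0' + format(idx, f'0{codebook_bits}b'))  # miss
            if len(cache) == cache_size:
                cache.pop()                                     # evict least recent
        cache.insert(0, idx)                                    # mark as most recent
    return ''.join(bits)

# Example: repeated indices, common in smooth regions, cost only 5 bits each.
print(encode_indices([12, 12, 7, 12, 200, 7]))
```

Smooth images produce long runs of repeated or nearby indices, which is why a small cache of this kind shortens the average code length.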


Image compression is an important issue that affects the convenience of distributing digital images through computer networks and the utilization of memory space. SMVQ provides a way to improve VQ-based image compression. Inspired by SMVQ, in this paper we have offered a novel state codebook design that helps enhance compression efficiency by keeping the most lately used codewords. The experimental results show that the proposed method can indeed improve the compression rate. Another important design element of our proposed method is the guiding table, which uses two bits to mark the current block's situation. Due to the design of the guiding table, the proposed method can dramatically enhance compression performance on smooth images such as Alan and Toy.
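The text says only that the guiding table spends two bits per block; the concrete meanings assigned to the four codes below are hypothetical, chosen to show how such a table could be packed.

```python
# Hypothetical sketch of a two-bit guiding table: the meanings of the four
# codes are assumptions, not taken from the paper.
from enum import IntEnum

class BlockMode(IntEnum):
    CACHE_HIT = 0b00   # block coded with the 16-entry state codebook
    FULL_INDEX = 0b01  # block coded with the full VQ codebook
    COPY_LEFT = 0b10   # block identical to its left neighbour
    COPY_UPPER = 0b11  # block identical to its upper neighbour

def pack_guiding_table(modes):
    """Pack a sequence of BlockMode values, two bits each, into bytes."""
    out = bytearray()
    acc, nbits = 0, 0
    for m in modes:
        acc = (acc << 2) | int(m)
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                      # flush a partial final byte
        out.append(acc << (8 - nbits))
    return bytes(out)

print(pack_guiding_table([BlockMode.CACHE_HIT, BlockMode.COPY_LEFT,
                          BlockMode.FULL_INDEX, BlockMode.COPY_UPPER]).hex())
```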

By further refining the algorithm, the performance and accuracy of the system can be improved.

A flexible content-based approach to adaptive image compression
7/06/2006

Recent research in image compression has focused on lossy compression algorithms. However, the baseline implementations of such algorithms generally use a universal quantization process that results in poor image quality for certain types of images, particularly mixed-content images. This paper addresses this image quality issue by presenting a new algorithm that provides flexible and customizable image quality preservation by introducing an adaptive thresholding and quantization process based on content information such as edge and texture characteristics from the actual image. The algorithm is designed to improve visual quality based on the human vision system. Experimental results from the compression of various test images show noticeable improvements both quantitatively and qualitatively relative to baseline implementations as well as other adaptive techniques.
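As a rough illustration of content-adaptive quantization (not the paper's exact algorithm), the sketch below scales a block's quantization step by a simple gradient-based activity measure, so edge and texture blocks are quantized more finely than flat ones.

```python
# Illustrative sketch: per-block quantization step adapted to edge/texture
# activity. The activity measure and scaling rule are assumptions.
import numpy as np

def adaptive_quantize(block, base_step=16.0):
    # Simple activity measure: mean absolute gradient within the block.
    gy, gx = np.gradient(block.astype(float))
    activity = np.mean(np.abs(gx) + np.abs(gy))
    # Finer step for high-activity (edge/texture) blocks, coarser for flat ones.
    step = base_step / (1.0 + activity / 8.0)
    return np.round(block / step) * step, step

rng = np.random.default_rng(0)
flat = np.full((8, 8), 120.0)
textured = flat + rng.normal(0, 25, (8, 8))
print(adaptive_quantize(flat)[1], adaptive_quantize(textured)[1])
```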

The main contribution of this paper is a new content-adaptive image quality preservation algorithm for lossy compression algorithms. The algorithm is versatile and can retain detail clarity for different types of images, including mixed-content images. Furthermore, it is designed to be highly customizable for use in specific applications.

This paper has proposed a new method for image quality preservation based on the concept of adaptive thresholding and quantization using image content characteristics. The proposed algorithm is versatile and highly customizable for specific domains. Experimental results show that overall image quality preservation is noticeably improved over the Q-factor approach and the MPEG2 adaptive quantization algorithm at the same compression levels. It is believed that this method can be successfully implemented in various digital imaging systems such as digital cameras and multimedia systems to produce results with better overall visual quality preservation when lossy image compression is utilized.

Future work includes the design and implementation of the proposed algorithm in hardware, as well as an investigation of optimal parameters for the algorithm in different domains.

SAR Image Compression Using HVS Model

Image Compression with Different Types of Wavelets
04/05/2006

Generally, synthetic aperture radar (SAR) image compression methods based on the wavelet transform remove the statistical redundancy of image data and neglect visual redundancy. In view of this problem, a new image compression method based on the human visual system (HVS) is proposed in this paper. First, the SAR image is decomposed by the wavelet transform; then the wavelet coefficients in different subbands are weighted by the peak of the contrast sensitivity function (CSF) curve in the wavelet domain; at last, the set partitioning in hierarchical trees (SPIHT) algorithm is used to code the weighted wavelet coefficients into an embedded bit stream. Compression results show that, compared with the conventional SPIHT algorithm, the method proposed in this paper achieves better subjective visual quality at the same compression ratio with almost equivalent objective evaluation results.

Data compression, which can be lossy or lossless, is required to decrease the storage requirement and improve the data transfer rate. One of the best image compression techniques uses the wavelet transform. It is comparatively new and has many advantages over others. The wavelet transform uses a large variety of wavelets for the decomposition of images. State-of-the-art coding techniques like EZW, SPIHT (set partitioning in hierarchical trees), and EBCOT (embedded block coding with optimized truncation) use the wavelet transform as their basic and common step before their own further technical refinements. The wavelet transform results therefore have an importance that depends on the type of wavelet used.

The method we proposed mainly focuses on removing visual redundancy while also removing statistical redundancy in SAR image compression, to increase the subjective quality of the reconstructed image. First, the wavelet transform is applied to the SAR image; then each subband is weighted according to the frequency sensitivity of the human visual system (HVS) model; and at last the wavelet coefficients are coded by the SPIHT algorithm. Experimental results showed that, at the same compression ratio, the method proposed in this paper increases the visual quality of the reconstructed image and preserves image edge and texture information efficiently.
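A minimal sketch of this pipeline's weighting step is given below, assuming PyWavelets for the transform; the per-level weights stand in for the CSF-derived values, which the text does not specify.

```python
# Sketch of the HVS weighting step before SPIHT coding. The weights below are
# placeholders, not the CSF values used in the paper.
import numpy as np
import pywt

def csf_weight_subbands(image, wavelet='bior4.4', levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Placeholder per-level weights, roughly emphasizing mid frequencies.
    weights = {1: 1.0, 2: 1.2, 3: 0.8}    # level 1 = finest subbands
    out = [coeffs[0]]                      # approximation band left unweighted
    for i, (ch, cv, cd) in enumerate(coeffs[1:]):
        level = levels - i                 # wavedec2 lists coarsest detail first
        w = weights.get(level, 1.0)
        out.append((ch * w, cv * w, cd * w))
    return out                             # would then be fed to a SPIHT coder

img = np.random.rand(64, 64)
weighted = csf_weight_subbands(img)
```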

This paper analyzed human visual characteristics and weighted the wavelet coefficients in different subbands according to the CSF to accomplish SAR image progressive transmission and compression. Experimental results showed that, at the same compression ratio, this method can achieve better subjective visual quality and efficiently preserve texture and edge information while suppressing speckle noise to a certain degree. Since the SPIHT algorithm uses uniform quantization for encoding, further research may focus on using a variable quantization step to compress SAR images more efficiently.

The proposed method mainly focused on comparing the different wavelets.

Wavelet image compression has revolutionized the image compression field with remarkable results. It encompasses the state-of-the-art techniques, but wavelet decomposition remains the initial step for all of them, including wavelet packet techniques. Therefore, there was a need to exploit the inherent ability of wavelets. The analysis was carried out keeping in mind that if a decomposition produces good results, it will also give better chances to advanced techniques for further improved results.
Results were obtained by using different types of wavelets for a test image, in terms of the time taken by the codec and the PSNR, for second-level decomposition approximation and the SPIHT technique:
1. Daubechies wavelets
2. Biorthogonal wavelets
3. Coiflets and Dmeyer wavelets
4. Symlet wavelets

There is a need to carry out a study that involves different images and different decomposition levels to get more accurate results.

Fractal Color Image Compression on a Pseudo Spiral Architecture

In this paper, different wavelets have been used to perform the transform of a test image, and their results have been discussed and analyzed. The analysis has been carried out in terms of the PSNR (peak signal-to-noise ratio) obtained and the time taken for decomposition and reconstruction. The SPIHT coding algorithm is considered a basic standard in the field of wavelet-based compression. In addition to the wavelet analysis for simple decomposition, an analysis of the SPIHT coding algorithm in terms of PSNR for different wavelets is also carried out here. This analysis will help in choosing the wavelet for decomposing images as per their application.
This paper presents a new approach for fractal color image compression on a Pseudo Spiral Architecture. Fractal coding is a relatively recent method for still-image compression. Traditionally, fractal coding has been used for gray-level images with rectangular domain and range blocks, although new approaches have been proposed that compress both gray-level and color images. The proposed approach first determines the pixels' trichromatic coefficients within the homogeneous blocks formed by a hierarchical partitioning method. Then each block is represented by the mean value of its pixels' trichromatic coefficient ratios, and a single one-plane image is composed. The one-plane image in the traditional square structure is represented in the Pseudo Spiral Architecture for compression. On this Spiral Architecture image, a fractal gray-level image coding algorithm is applied, with the median as the basis for forming the codebook blocks, to obtain the encoded image.
SPIHT has also been used here for this purpose. Within the Biorthogonal and Coiflet families, the same wavelets give better results in both respects. Generally speaking, it is obvious from the results that any wavelet giving good results for decomposition will also produce good results for the advanced techniques used for image compression.
In this paper, a new approach is proposed. It is based on the RGB color model: by hierarchically partitioning the three color planes into strongly correlated blocks, a one-color-plane image is obtained. To encode the one color plane, it is represented in a novel image structure, the Spiral Architecture, which is inspired by anatomical considerations of primate vision. In the Spiral Architecture, an image is a collection of hexagonal elements. Any hexagonal pixel has only six neighboring pixels, each at the same distance from the center hexagon of the seven-hexagon unit of vision. These hexagonal elements are identified by a designated positive number called the Spiral Address. To get the encoded image, a fractal gray-level image encoding algorithm is used on the Spiral Architecture image.

For codebook formation, median calculations are used. According to the experiments performed, we found that, due to the use of hierarchical partitioning, the number of color planes is reduced from three to one, and, with the use of the Pseudo Spiral Architecture in place of the traditional architecture, the proposed approach is faster than other fractal coding methods with little effect on image quality after decoding. Experimental results show the effectiveness and potential of this approach for various color image database processing applications. In a general fractal coding implementation, the image is used as two-dimensional data. In the proposed approach, the one-plane image is represented in the Spiral Architecture, which gives a one-dimensional representation due to spiral multiplication.

A quad-tree decomposition approach to cartoon image compression

Due to the reduction of color planes from three to one and the Spiral Architecture, the proposed approach is faster than other fractal coding methods. The use of the median gives rise to a new dimension for the calculation. Experimental results show the effectiveness and potential of this approach for various color image database processing applications.


A quad-tree decomposition approach is proposed for cartoon image compression in this work. The proposed algorithm achieves excellent coding performance by using a unique quad-tree decomposition and shape coding method along with a GIF-like color indexing technique to efficiently encode the large areas of uniform color that commonly appear in cartoon-type images. To reduce complexity, the input image is partitioned into small blocks, and the quad-tree decomposition is applied independently to each block instead of the entire image. The LZW entropy coding method can be performed as a postprocessing step to further reduce the coded file size. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for cartoon images that contain 256 colors or fewer.
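The core subdivision rule can be sketched as follows; the leaf record format here is illustrative, since the exact bitstream layout is described only later in the text.

```python
# Minimal sketch of the quad-tree rule described above: recursively subdivide
# a block until every leaf holds at most two distinct colors. Assumes square
# power-of-two blocks; the leaf record format is an assumption.
import numpy as np

def quadtree_encode(block, x=0, y=0, out=None):
    if out is None:
        out = []
    colors = np.unique(block)
    if len(colors) <= 2 or block.shape[0] == 1:
        out.append((x, y, block.shape[0], tuple(colors[:2])))  # leaf record
        return out
    h = block.shape[0] // 2
    quadtree_encode(block[:h, :h], x, y, out)          # top-left
    quadtree_encode(block[:h, h:], x + h, y, out)      # top-right
    quadtree_encode(block[h:, :h], x, y + h, out)      # bottom-left
    quadtree_encode(block[h:, h:], x + h, y + h, out)  # bottom-right
    return out

block = np.zeros((8, 8), dtype=np.uint8)
block[2:4, 2:6] = 1
block[5, 5] = 2          # a third color forces subdivision
print(quadtree_encode(block))
```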

First, the color palette is constructed based on the colors in the input image as well as a set of training images; this can be viewed as a preprocessing step. Second, the input image is partitioned into small blocks. For each block, the number of colors is examined. If the block has one or two colors, no further processing is needed and the code for this block is output; otherwise, the block is subdivided until each subblock has two colors or fewer. After subdivision, the code for each subblock is output. For a one-color block, two bytes that represent the block and the color information are output. For a two-color block, in addition to the block and color information, the binary bit pattern that indicates the positions of 0 and 1 in the block also needs to be coded, as in the sketch below.
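The following sketch packs such leaf records into bytes. The one-byte field widths are assumptions for illustration; the text specifies only that a one-color block takes two bytes for the block and color information.

```python
# Hypothetical byte layout for the leaf records described above; the exact
# format is not given in the text, so the field sizes are assumptions.
def encode_leaf(block_id, colors, bit_pattern=None):
    out = bytearray([block_id & 0xFF, colors[0] & 0xFF])  # block + first color
    if len(colors) == 2:                                  # two-color leaf
        out.append(colors[1] & 0xFF)
        # Pack the 0/1 position pattern (e.g. 16 bits for a 4x4 block).
        acc = 0
        for bit in bit_pattern:
            acc = (acc << 1) | bit
        out += acc.to_bytes((len(bit_pattern) + 7) // 8, 'big')
    return bytes(out)

# A 4x4 two-color block whose upper half uses palette color 3, lower half 9.
pattern = [0] * 8 + [1] * 8
print(encode_leaf(block_id=5, colors=(3, 9), bit_pattern=pattern).hex())
```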

Thus, the time complexity of the proposed approach can be expressed as O(n). From the implementation results, it can be seen that introducing the Spiral Architecture and hierarchical partitioning into fractal image compression gives better results and has great potential for improving compression performance.
The simple color and shape properties within a block of a cartoon image were exploited using quad-tree decomposition and color and shape palettes for efficient coding. The resulting bit stream can be further encoded by the LZW coder as a post-processing step. Experimental results demonstrated that the proposed method gives significantly better performance than other well-known lossless image compression methods such as GIF, PNG, and lossless JPEG 2000.

A Novel Interactive Progressive Decoding Method for Fractal Image Compression

Fractal image compression is an efficient and effective technique in image coding. This paper presents a novel interactive progressive fractal decoding method, with which the compressed file can be transmitted incrementally and reconstructed progressively at the user's side. It requires no modification to either the encoder or the decoder of any fractal image compression algorithm. In addition, it provides a user-controlled decoding procedure that inherits the fast decoding feature of fractal coding. The experimental results illustrate that the proposed progressive decoding method can be applied directly to various fractal compression techniques and is especially favourable in applications where the transmission bandwidth is of great concern.

Research on the Radar Image Compression of VDR Based on SPIHT

The bit pattern is encoded using a table look-up technique. A shape palette is created to avoid repeatedly encoding the same bit pattern. Finally, the LZW compression scheme is applied to the data part as a postprocessing step to yield the final encoded bit stream.

In this paper, a novel progressive decoding scheme is proposed for fractal image compression, especially favourable in applications where bandwidth is a critical concern. In our user-interactive approach, no change is required to the encoder or decoder; instead, based on the fidelity demanded by the application, a certain part of the compressed file can be transmitted first so that a preview image reconstructed from the received data can be displayed to users. The decision of whether further details of the image are desired can then be made based on the preview image. If additional data are received upon the user's request, a decoded image with greater fidelity than the previous one can be generated by using the additional data together with the data received earlier. As more data are received, higher image quality can be achieved. In brief, with the proposed scheme, by receiving the code stream in incremental steps, the image quality in terms of PSNR increases progressively.
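The interaction loop can be captured in a few lines. This is a conceptual sketch: `fractal_decode` is a placeholder for any existing fractal decoder (the scheme requires no change to it), and the transmission fractions are arbitrary.

```python
# Conceptual sketch of the user-interactive protocol described above: send a
# prefix of the compressed file, decode a preview, and fetch more data only
# on request. `fractal_decode` stands in for an existing fractal decoder.
def progressive_session(compressed, fractal_decode, fractions=(0.25, 0.5, 1.0)):
    received = b''
    for frac in fractions:
        cut = int(len(compressed) * frac)
        received += compressed[len(received):cut]   # incremental transmission
        preview = fractal_decode(received)          # decode what we have so far
        yield frac, preview                         # user inspects the preview
        # The caller simply stops iterating when the preview is good enough,
        # which suspends any further transmission.

# Usage idea: for frac, img in progressive_session(data, decoder): show(img)
```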

The proposed method mainly focuses on the following steps. First, the DWT and its fast Mallat algorithm are presented.

This paper proposed a novel interactive progressive fractal decoding approach. It works by sending only part of the compressed file to generate a preview image and suspending the rest of the transmission until further instruction from the user is received. It has the following advantages over other progressive decoding methods: it requires no modification to the encoder or decoder; users can control the decoding procedure by specifying the percentage of data required; and it inherits the conventional fast fractal decoding. The proposed progressive decoding method can be applied directly to various fractal compression techniques and is especially favourable in applications where the transmission bandwidth is of great concern.

In this paper, we code the radar image with two coding algorithms: EZW and SPIHT.

The implementation of the proposed method remains to be improved.


In this paper, the radar image compression of the Voyage Data Recorder (VDR) is researched. A sheet of radar image is stored in the VDR at regular intervals, so the compression algorithm for radar images may be treated as still-image compression. The image compression process includes the Discrete Wavelet Transform (DWT), quantization, and entropy coding. First, the DWT and its fast Mallat algorithm are presented. Then, the character of the image after the DWT is analyzed. The set partitioning in hierarchical trees (SPIHT) coder includes the functions of quantization and entropy coding; the SPIHT coder is explained briefly in this paper. Finally, several wavelet functions in common use are chosen to compress the radar image, coding it with two algorithms: embedded zerotree wavelet (EZW) and SPIHT. The simulation results show that SPIHT is more effective than EZW.
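As a sketch of the first stage of this pipeline, one level of the fast Mallat algorithm can be written as separable row/column filtering with downsampling; PyWavelets is used here only for the filter taps, and the boundary handling is simplified.

```python
# Sketch of one level of the fast Mallat algorithm for the 2-D DWT: filter
# and downsample rows, then columns. A production coder (EZW/SPIHT) would
# operate on these subbands.
import numpy as np
import pywt

def mallat_step(image, wavelet='db2'):
    w = pywt.Wavelet(wavelet)
    lo, hi = w.dec_lo, w.dec_hi

    def analyze(x, f):   # convolve each row with filter f, keep odd samples
        return np.array([np.convolve(r, f)[1::2] for r in x])

    L = analyze(image, lo)            # row lowpass
    H = analyze(image, hi)            # row highpass
    LL = analyze(L.T, lo).T           # column lowpass of L  -> approximation
    LH = analyze(L.T, hi).T           # column highpass of L -> detail
    HL = analyze(H.T, lo).T
    HH = analyze(H.T, hi).T
    return LL, LH, HL, HH

img = np.random.rand(16, 16)
LL, LH, HL, HH = mallat_step(img)
print(LL.shape)   # roughly half the size in each dimension
```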

Then, the character of the image after the DWT is analyzed. The radar image is compressed and coded with two coding algorithms: embedded zerotree wavelet (EZW) and set partitioning in hierarchical trees (SPIHT).

Efficient Embedded Image Compression Using Optimal Reversible Biorthogonal Integer Wavelet Transform

Using the reversible biorthogonal integer wavelet transform (RB-IWT) for image compression has many advantages. For example, through appropriate techniques, a losslessly decoded image can be reconstructed, which is very significant for medical and remote sensing image processing. However, the RB-IWT has low lossy image compression efficiency. The main reason is that the RB-IWT-based image coefficients have a smaller dynamic range and worse energy compaction than those produced by the discrete wavelet transform (DWT). In this paper, the optimal scaling factor (OSF) is proposed for the RB-IWT. The OSF optimizes the coefficient distribution of the image subbands. Experimental results show that the presented method based on the OSF has better lossy compression performance than the general RB-IWT and is comparable to the Daubechies 9/7 wavelet filter.

In this paper, a new optimization method is proposed for the general RB-IWT. In the new method, the optimal scaling factor (OSF) is the most significant element of the whole scheme. By selecting and adjusting the scaling factor, we can optimize the transform coefficient distribution of each subband. We compare the lossy compression performance of the presented OSF-based algorithm using the 5/3, 13/7, and 6/14 IWTs with the original RB-IWT without the OSF. Experimental results show that the presented method based on the OSF has better lossy compression performance than the general RB-IWT and is comparable to the Daubechies 9/7 wavelet filter.
The simulation results show that SPIHT is more effective than EZW. The SPIHT algorithm makes excellent use of two characteristics of wavelet coefficients: their relevance to the Human Visual System (HVS) and their similar orientation. The image reconstructed with SPIHT has good quality and high PSNR, and at the same time SPIHT performs very well in terms of compression ratio. In the SPIHT coder, the encoding can be stopped at any stage. The decoder then approximates the values of the original coefficients with a precision that depends on the number of bits coded for each coefficient. This property is called embedded coding.
In this paper, we have proposed an efficient, low-complexity image coding algorithm based on an improved RB-IWT. The OSF scheme is adopted to improve the energy compaction of the RB-IWT. The presented method has three primary advantages, illustrated by the lifting sketch below:
* The OSF, applied within the lifting scheme (LS), retains the low computational complexity of the RB-IWT.
* The OSF improves the lossy compression performance of the RB-IWT.
* The new algorithm can efficiently support both lossless and lossy compression with a single bit stream.
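Below is a sketch of a reversible 5/3 integer wavelet transform implemented with the lifting scheme, using periodic boundary extension for brevity; the closing comment shows where an OSF-style subband weight would enter, with an arbitrary value since the paper's optimal factors are not given here.

```python
# Sketch of a reversible 5/3 integer wavelet transform via lifting, with
# periodic boundary extension (np.roll). The 1.25 weight at the end is an
# arbitrary stand-in for an optimal scaling factor (OSF).
import numpy as np

def iwt53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left even + right even) / 2).
    d -= (s + np.roll(s, -1)) >> 1
    # Update step: approx = even + floor((left detail + right detail + 2) / 4).
    s += (np.roll(d, 1) + d + 2) >> 2
    return s, d

def iwt53_inverse(s, d):
    s = s - ((np.roll(d, 1) + d + 2) >> 2)       # undo update
    d = d + ((s + np.roll(s, -1)) >> 1)          # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([10, 12, 11, 15, 60, 61, 59, 58])
s, d = iwt53_forward(x)
assert np.array_equal(iwt53_inverse(s, d), x)    # perfect reconstruction
# For lossy coding, subband samples could then be weighted by an OSF before
# quantization, e.g. d_scaled = 1.25 * d.
```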

The limitation of the RB-IWT is its low efficiency for lossy image compression: the RB-IWT makes the wavelet coefficients have a smaller dynamic range and worse energy compaction than those of the DWT.


Lossy Color Image Compression Technique using Fractal Coding with Different Sizes of Range and Domain Blocks

This paper proposes an approach based on fractal coding, which falls into the lossy compression category. Fractal coding is generally used for gray-level images. The proposed method first separates the three planes of the RGB color space image, and then the fractal coding algorithm is applied to the individual planes independently. Extensive experiments are carried out with different sizes of domain blocks and range blocks on well-known color images from the literature. Results are analyzed with respect to time, image compression ratio, and image reconstruction quality. Implementation results show that a greater compression ratio can be achieved with large domain blocks, but with a greater trade-off in image quality. Results are compared and presented with performance graphs. The proposed method can be useful for huge-database applications where image reconstruction quality does not matter much. The time complexity of the proposed method is O(n²).
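The O(n²) cost comes from matching every range block against every candidate domain block; the simplified search below illustrates that structure. The 2x2 averaging, raster domain grid, and least-squares contrast/brightness fit are standard choices, not necessarily the authors' exact settings.

```python
# Simplified sketch of the range-domain search at the heart of fractal
# coding; real coders also search isometries and use classification to
# cut the quadratic cost noted above.
import numpy as np

def shrink(domain):                  # average 2x2 areas: domain -> range size
    return domain.reshape(domain.shape[0] // 2, 2, -1, 2).mean(axis=(1, 3))

def encode_range(rng_block, image, d_size):
    best = None
    for y in range(0, image.shape[0] - d_size + 1, d_size):
        for x in range(0, image.shape[1] - d_size + 1, d_size):
            dom = shrink(image[y:y + d_size, x:x + d_size])
            # Least-squares contrast s and brightness o for rng ~ s*dom + o.
            s, o = np.polyfit(dom.ravel(), rng_block.ravel(), 1)
            err = np.sum((s * dom + o - rng_block) ** 2)
            if best is None or err < best[0]:
                best = (err, y, x, s, o)
    return best                       # transform parameters for this range

img = np.random.rand(32, 32)
print(encode_range(img[:4, :4], img, d_size=8))
```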


Fractal coding has a long implementation time; it requires considerable time to encode an image. The time complexity of the proposed method is O(n²). In this paper we have analyzed the effect of the domain block size on the compression ratio and reconstruction quality. During implementation, the range block size was kept constant and the domain block size was varied. We conclude that:
1) the larger the domain block, the larger the compression ratio, with more deterioration in reconstruction quality; and
2) the smaller the domain block, the lower the compression ratio, with better reconstruction quality.

The implementation of the proposed method can be enhanced by using the median as the key for calculating the shrunken domain blocks, with variation in the range block size. The proposed method can also be implemented with a neural network. Fractal coding gives a high compression ratio with little quality distortion, which is acceptable for applications where picture quality does not play an important role.
