
A Wavelet-packet based Geodesic Active Region

Model (WB-GARM) for Glandular segmentation


of histopathology images

Adnan Osmani, B.Sc.

Supervisor: Dr. Nasir M. Rajpoot

Department of Computer Science,


University of Warwick,
Coventry CV4 7AL,
UK

A thesis submitted for the award of a


Master's degree by Research in Computer Science

July 2009
Abstract

This thesis discusses wavelet-based boundary enhancement techniques for improving the

segmentation quality of contour-based texture segmentation algorithms. With a focus on

improving the glandular segmentations of clinical histopathology images, a number of issues

with existing approaches are investigated before arriving at the conclusion that image and

boundary enhancement techniques play a significant role in improving image segmentation

quality.

We present a method for enhancing the visibility of region-of-interest (ROI) boundaries in

Chapter 3. This method takes advantage of the information in wavelet-packet sub-bands

overlaying wavelet feature information over a group of selected texture samples as part of a

supervised segmentation approach. This builds on the existing Geodesic Active Region model

and aims to improve the probability that a more accurate segmentation may be achieved post-

enhancement. Further insight into our algorithmic design process is also provided.

In Chapter 4, the proposed technique is validated against sets of both real world and medical

images. Experiments are demonstrated to present the improvement in segmentation quality

achieved with encouraging results being observed on both sets. Simple further adjustments are

also made to the algorithm providing additional benefits in the quality of results for the

application of glandular segmentation. The method proposed in this thesis is flexible enough to

be used in other segmentation problems, offering a computationally cheap qualitative

enhancement to their existing capabilities. It may also be powerful enough to offer real-world

solutions in the area of glandular segmentation.

Contents
Abstract ....................................................................................................................................... i

List of Figures ........................................................................................................................ iv

List of Tables .......................................................................................................................... vi

Acknowledgements ............................................................................................................ vii

Chapter 1 – Introduction and Objectives ................................................................... 1


1.1 Problem Description ................................................................................................................. 3
1.2 Main Contributions ................................................................................................................... 4
1.3 Thesis Organization .................................................................................................................. 4

Chapter 2 – Literature Review........................................................................................ 6


2.1 Edge-based segmentation....................................................................................................... 6
2.2 Region-based segmentation ...................................................................................................... 8
2.3 Texture-based segmentation ................................................................................................... 10
2.4 Hybrid segmentation methods ................................................................................................ 12
2.5 Contour-based segmentation................................................................................................... 12
2.6 Snakes: Active Contour Models ............................................................................................. 13
2.7 Level-set methods ................................................................................................................... 17
2.8 Chan-Vese Active Contour Model.......................................................................................... 20
2.9 Geodesic Active Region Model (GARM) .............................................................................. 21
2.10 Summary ............................................................................................................................... 25

Chapter 3 - Wavelet-based Geodesic Active Region Model ......................... 27


3.1 Texture descriptors.................................................................................................................. 28
3.2 The Wavelet transform ........................................................................................................... 30
3.3 The Inverse Wavelet transform ............................................................................................... 31
3.4 Wavelet Packets ...................................................................................................................... 32
3.5 The Forward Wavelet-packet transform ................................................................................. 33
3.6 Cost functions ......................................................................................................................... 34
3.7 Weaknesses of the GARM ...................................................................................................... 35
3.8 Improving the GARM ............................................................................................................. 37
3.9 A Wavelet-packet texture descriptor ...................................................................................... 38
3.10 A Pseudo-code description of the WB-GARM texture descriptor enhancement technique . 40
3.11 Generating Multi-Scale Wavelet Packet Texture Features ................................................... 40
3.12 Preparing WPF feature images for usage.............................................................................. 45
3.14 Rescaling pixel values........................................................................................................... 46

3.15 Pixel Addition ....................................................................................................................... 50


3.16 Adjustments for improved results in Medical Applications ................................................. 53
3.17 Summary ............................................................................................................................... 54

Chapter 4.................................................................................................................................. 55

4.1 Results on a real-world data set .............................................................................................. 55
4.1.1 Data ...................................................................................................................................... 55
4.1.2 Analysis of real-world results .............................................................................................. 58
4.2 Glandular segmentation of histology images .......................................................................... 59
4.2.1 Background information on Colon Cancer .......................................................................... 60
4.2.2 Prior work in Glandular Segmentation ................................................................................ 61
4.2.3 Application of WB-GARM to Glandular segmentation ...................................................... 64
4.2.4 Results on Glandular Segmentation ..................................................................................... 65
4.3 Segmentation Setup ............................................................................................................... 67
4.3.1 Data ...................................................................................................................................... 67
4.3.2 Texture ................................................................................................................................. 67
4.3.3 Ground truth generation ....................................................................................................... 68
4.4 Results on images without the thresholding of lymphocytes.................................................. 69
4.5 Results on images using lymphocyte thresholding ................................................................. 71
4.5.1 Specimen 1 ........................................................................................................................... 72
4.5.2 Specimen 2 ........................................................................................................................... 77
4.5.3 Specimen 3 ........................................................................................................................... 81
4.5.4 Specimen 4 ........................................................................................................................... 84
4.5.5 Specimen 5 ........................................................................................................................... 88
4.6 Overview and discussion of results......................................................................................... 92
4.6.1 Summary of the algorithm’s performance ........................................................................... 93
4.6.2 Areas for improvement ........................................................................................................ 93
4.6.3 Summary .............................................................................................................................. 93

Chapter 5 – Thesis Summary & Conclusions ....................................................... 94


5.1 Summary ................................................................................................................................. 94
5.2 Conclusions ............................................................................................................................. 96

Bibliography .......................................................................................................................... 98

List of Figures
Figure 2.1: An Example of two different Kernels .......................................................................... 7
Figure 2.2: A typical example of pixel aggregation ....................................................................... 9
Figure 2.3: An Active Contour attracted to edges ........................................................................ 14
Figure 2.4: An example of the Level set evolution of a circle ...................................................... 14
Figure 2.5: An example of ACM segmentation ............................................................................ 16
Figure 2.6: An example of more difficult ACM texture segmentation......................................... 18
Figure 3.1: Wavelet transform of the well-known ‘Lena’ image ................................................. 30
Figure 3.2: Daubechies reconstruction of the ‘Nat 2B’ image ..................................................... 29
Figure 3.3: The Wavelet-packet tree............................................................................................. 33
Figure 3.4: A cost function applied to a Wavelet-packet transform ............................................. 35
Figure 3.5: Examples of GARM Segmentation ............................................................................ 36
Figure 3.6: Example of a Wavelet-packet decomposition ............................................................ 40
Figure 3.7: FWPT Decomposition ................................................................................................ 41
Figure 3.8: IWPT Recomposition ................................................................................................. 42
Figure 3.9: FWPT of the ‘Lena’ image......................................................................................... 42
Figure 3.10: Perfect reconstruction of the ‘Lena’ image .............................................................. 43
Figure 3.11: Generating IWPF feature data ................................................................................. 44
Figure 3.12: IWPT of Scale 2, subband 2 with greater detail ....................................................... 45
Figure 3.13: Creating Feature Images ........................................................................................... 45
Figure 3.14:
(a) A synthetic Brodatz image ............................................................................... 46
(b) A selection of Wavelet packet subbands .......................................................... 46
(c) A WP feature (WPF) version of the Brodatz image ......................................... 46
Figure 3.15: An analysis of pixel value ranges ............................................................................. 48
Figure 3.16: Equation for threshold-based pixel rescaling ........................................................... 48
Figure 3.17: Example of applied pixel rescaling .......................................................................... 48
Figure 3.18: The effect of contrast-adjustment on WPF samples ................................................. 50
Figure 3.19: Visual walkthrough of proposed algorithm .............................................................. 51
Figure 4.1: ARM segmentation results ......................................................................................... 56
Figure 4.2: WB-GARM segmentation results .............................................................................. 57
Figure 4.3: Ground truth images featuring points of curvature .................................................... 57
Figure 4.4: Colon biopsy samples featuring variations in glands, size and intensity ................... 60
Figure 4.5: Regions of interest in colon biopsy samples .............................................................. 62
Figure 4.6: Visual analysis of difficulties in glandular segmentation .......................................... 62
Figure 4.7: Artefacts surrounding the glands................................................................................ 64
Figure 4.8: Glandular segmentation with lymphocyte-thresholding ............................................ 70
Figure 4.9: Specimen 1 - Hand labelling ...................................................................................... 72
Figure 4.10: Specimen 1 – Segmentation comparison.................................................................. 73
Figure 4.11: Specimen 1 – Boundary point comparison............................................................... 74
Figure 4.12: Specimen 1 – Comparison of results after contrast adjustment ............................... 76
Figure 4.13: Specimen 2 – Segmentation comparison.................................................................. 78
Figure 4.14: Specimen 2 – Boundary point comparison................................................................ 78
Figure 4.15: Specimen 2 – Comparison of results after contrast adjustment ............................... 79
Figure 4.16: Specimen 3 – Segmentation comparison.................................................................. 81
Figure 4.17: Specimen 3 – Boundary point comparison................................................................ 80
Figure 4.18: Specimen 3 – Comparison of results after contrast adjustment ............................... 83
Figure 4.19: Specimen 4 – Segmentation comparison.................................................................. 85

Figure 4.20: Specimen 4 – Boundary point comparison............................................................... 86
Figure 4.21: Specimen 4 – Comparison of results after contrast adjustment ............................... 85
Figure 4.22: Specimen 5 – Segmentation comparison.................................................................. 89
Figure 4.23: Specimen 5 – Boundary point comparison................................................................ 90
Figure 4.24: Specimen 5 – Comparison of results after contrast adjustment ............................... 92
Figure 4.25: Percentage of correctly segmented boundary points – a distribution comparison ... 92

List of Tables
Table 4.1: Comparison of segmentation qualities......................................................................... 57
Table 4.2: Table of Algorithmic comparisons .............................................................................. 74
Table 4.3: Comparison table for segmentation results after contrast adjustment ......................... 77
Table 4.4: Table of Algorithmic comparisons .............................................................................. 79
Table 4.5: Comparison table for segmentation results after contrast adjustment ......................... 80
Table 4.6: Table of Algorithmic comparisons .............................................................................. 82
Table 4.7: Comparison table for segmentation results after contrast adjustment ......................... 84
Table 4.8: Table of Algorithmic comparisons .............................................................................. 86
Table 4.9: Comparison table for segmentation results after contrast adjustment ......................... 88
Table 4.10: Table of Algorithmic comparisons ............................................................................ 90
Table 4.11: Comparison table for segmentation results after contrast adjustment ....................... 92

Acknowledgements

My gratitude goes to my thesis supervisor, Dr. Nasir Rajpoot, for sharing his guidance and
insightful thoughts during the development of this thesis. His kindness, devotion, encouragement
but most of all his patience were a great asset during my write-up and will always be appreciated.
I am also indebted to my mother, father and sister for being a constant source of love and a
continuous inspiration throughout my life - they have always supported my aspirations and I
would not be the man I am today without them. I thank them for all they have done for me. I
thank my graduate school for being so understanding during the course of this thesis and for the
additional time provided to get the concepts right. Dr. Daniel Heesch, formerly of
Imperial College, has my thanks for his research papers and humour which assisted me during
some of the more difficult moments in finishing this thesis. Finally I would like to thank
Danielle for her love and the happiness and joy she brings into my life and for always
encouraging me. The support of my family and friends has helped make this thesis possible and
I would like to extend my thanks to them all.

Chapter 1

Introduction

Medical imaging methods generate images containing a broad range of information about the
anatomical structures being studied. This information can be used to assist in disease diagnosis
and in the selection of adequate therapies and treatments. A problematic scenario arises when
physicians visually perform a first-hand analysis of medical images: there is a potential for
observer bias and error, as one physician's visual perception of an image may differ from
another's [1]. Developments in medical image processing have broadened its capabilities in
recent times to be both highly sophisticated and, in many cases, accurate. This contrasts with
human diagnosis, where such a level of certainty is not always present. In addition, the presence
of noise, the variability of biological cells and tissues, and anisotropy issues with imaging
systems make the automated analysis of medical images (using both supervised and
unsupervised approaches) a very difficult and time-consuming task.

Simplifying the ultimate goal of the analysis often restricts it to single anatomical areas (e.g. the
head), single structures inside those areas (e.g. the brain), single image modalities (e.g. echo),
and a single type of view. Information which may be extracted about the areas to be computationally
analyzed may fall into several different categories: colour, shape, texture, position and structure
[2]. The knowledge which can be integrated into a system or method for automated analysis
typically represents a highly simplified model of the real world. This can result in certain
applications of automated methods being unreliable, slow or impractical for use under lab
conditions in hospitals. To combat this, the development of a technique should ideally be both
robust and capable of factoring in complications, artefacts and issues found in real-world image
data.

The computational analysis of medical images often revolves around the prior task of
segmentation and classification of specific areas inside the body [1]. It is these techniques that
allow a computer to simulate a physician with high powers of discrimination without the
downside of single-observer bias. Segmentation (and in particular texture segmentation) is of
particular interest as it is a growing field with many implications for laser-assisted surgery [3].
The ability to provide an accurate segmentation of an area of interest would mean that a surgeon or
surgical technician could greatly reduce the risk of burning more tissue than necessary,
potentially lowering the risk of complications and pain for a patient.

Image segmentation has been an ongoing challenge in the area of computer vision for several
years now and is also a fundamental step in medical image analysis. It is believed that texture is
one of the primary visual features required for segmentation as it is one of the main properties
the human visual system uses to distinguish everyday objects from each other. Although a
number of different definitions for texture exist, none of these have been proven to be adequate
and complete for all applications where it may apply [4][5].

Texture segmentation is typically composed of two primary steps: the extraction of texture
features from an image and the clustering of these features in an area to achieve a segmentation.
The extraction step maps differences in the spatially varying intensity structures into differences
in the texture feature space. Homogeneous regions are then obtained by using a clustering
method to analyze the feature space. The quality of a classification (and segmentation) depends
strongly on the quality of the texture features used. The quality of these features, however, is
reliant on the spatial extent of the image data from which they are extracted. Were one able to
increase the quality of the texture data extracted from an image or enhance its boundaries, a
segmentation algorithm may be better able to determine where an object's borders end.
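The two-step pipeline described here can be sketched in miniature. The snippet below is an illustrative toy, not the thesis's method: it uses a crude local mean/variance feature in place of the wavelet-based descriptors discussed later, and a bare-bones k-means for the clustering step; all names are ours.

```python
import numpy as np

def texture_segment(image, n_classes=2, win=9, n_iter=10):
    """Toy two-step texture segmentation.

    Step 1 extracts a crude texture feature (local mean and variance);
    step 2 clusters the feature vectors with plain k-means. Real systems
    use richer descriptors (e.g. Gabor or wavelet-packet energies).
    """
    h, w = image.shape
    pad = win // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    # Step 1: per-pixel feature extraction over a sliding window.
    feats = np.zeros((h, w, 2))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            feats[i, j] = (patch.mean(), patch.var())
    X = feats.reshape(-1, 2)
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)  # normalise each feature
    # Step 2: k-means over the feature space. Seeding from points spread
    # along the variance axis keeps this toy example deterministic.
    order = np.argsort(X[:, 1])
    centres = X[order[np.linspace(0, len(X) - 1, n_classes).astype(int)]]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for k in range(n_classes):
            if (labels == k).any():
                centres[k] = X[labels == k].mean(0)
    return labels.reshape(h, w)
```

On an image whose left half is flat and whose right half is noisy, the variance feature alone separates the two regions; richer descriptors become necessary once textures differ in structure rather than in energy.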

Various methods exist for extracting textural features. These fall under the categories of
statistical, geometrical, signal-based and model-based approaches. Geometrical approaches also
cover structural methods, while paradigms such as autocorrelation features belong to the
statistical methods, which make use of the spatial distribution of gray-level values in an image
[6]. Looking further at the range of methods available, wavelet transforms and spatial-domain
filtering are other approaches that have been widely studied.

1.1 Problem Description

Inaccurate segmentation is a problem that affects many areas of image processing such as
medical imaging and in particular, glandular segmentation (GS). GS is an ongoing challenge
which spans across many areas of medical histopathology including the study of prostate images
[7]. In several cases, the isolation of a particular area of a slide for further study is of critical
importance in making an early prognosis of the disease – such as in the diagnosis of colon
cancer. In clinical histopathology, a significant amount of inter- and intra-observer variation
in the judgement of specimens can lead to inaccurate or inconsistent manual segmentations of
regions of interest. This deficiency of a single accurate observation for pathologists highlights an
area where computational analysis can aid in providing a reliable segmentation.

Whilst many studies have looked at the problem of segmenting the histopathological images
used in the diagnosis of colorectal cancer [8][9], few have been able to adequately address the
issue of segmentation accuracy. One of the main challenges to address in GS is boundary
segmentation, where the accurate segmentation of lumen (the interior part of the gland) from the
darker nuclei on its boundaries is the primary task any computational solution must address
effectively. Computational estimation of lumen boundaries can at times be a difficult task due to
the low differences in contrast between the lumen and material which surround the outer walls of
the gland. This closeness in intensity values makes accurate segmentation of lumen a far greater
challenge, but does highlight that GS is an area where improvements in the quality of a final
segmentation could be critical to aiding a pathologist or laser-guided surgeon in saving a
patient’s life.

Examining signal processing in greater detail, feature extraction can be viewed as a problem
composed of two key stages: a signal decorrelation step and a computation of the feature metric
which is often a probability measure [10]. Wavelet Analysis of an image can be viewed in the
frequency domain as partitioning it into a set of sub-bands. The Discrete Wavelet Transform
(DWT) offers a multi-resolution representation of an image. Transient events in the data are also
preserved by this analysis. Whilst the DWT applies the wavelet transform step to just the
low-pass result, the Wavelet Packet Transform [11] applies it to both the low-pass and high-pass
results. This paves the way for obtaining a wider range of texture features from an image than is
currently harnessed in segmentation approaches such as the Geodesic Active Region model
(GARM), a framework designed for frame-partition problems based on curve propagation under
the influence of boundary- and region-based forces.
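The structural difference between the two transforms is easy to see by counting sub-bands. The sketch below uses a toy un-normalised Haar analysis in NumPy purely for brevity (the thesis's experiments use other filter banks, e.g. Daubechies wavelets); all function names are ours.

```python
import numpy as np

def haar_split(x):
    """One level of 2-D Haar analysis: return the (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # rows: low-pass
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return [ll, lh, hl, hh]

def dwt_bands(x, levels):
    """DWT: only the low-pass band is re-decomposed -> 3*levels + 1 bands."""
    bands, current = [], x
    for _ in range(levels):
        ll, lh, hl, hh = haar_split(current)
        bands += [lh, hl, hh]
        current = ll
    return [current] + bands

def wpt_bands(x, levels):
    """Wavelet packets: every band is re-decomposed -> 4**levels bands."""
    bands = [x]
    for _ in range(levels):
        bands = [b for band in bands for b in haar_split(band)]
    return bands

img = np.random.default_rng(0).normal(size=(64, 64))
print(len(dwt_bands(img, 2)), len(wpt_bands(img, 2)))   # 7 16
```

At two levels the DWT yields 7 sub-bands while the packet transform yields a full tree of 16, which is the wider feature pool the text refers to.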

1.2 Main Contributions

This thesis proposes a new texture descriptor for texture segmentation models which utilizes
wavelet packet texture features (WPF) with combined pixel-addition of the source as part of an
ROI boundary enhancement routine. The primary enhancements made in this routine are an
increase in the visible edge and boundary artefacts which surround the main objects in an input
image, allowing segmentation approaches to have a clearer understanding of where the true
boundaries of an ROI lie. The result of these enhancements is a contour based texture
segmentation algorithm which may offer improved segmentation results on both real-world and
medical images, as will be discussed in Chapters 3 and 4. For the purposes of performance
evaluation and demonstration, the proposed multi-scale enhancement routine is directly
integrated into the Geodesic Active Region model (GARM) [49] such that the input to the
GARM spectrum analyzers is the Gabor response to a WPF texture sample summed with a
source texture sample. The proposed method is found to be useful with respect to this particular
application: it is capable of enhancing the clarity of boundaries surrounding
the lumen in glands such that a texture descriptor is more accurately capable of representing
these boundaries. This effectively results in a segmentation model being better able to correctly
segment the objects that lie inside them and a significantly more accurate final segmentation.

1.3 Thesis Organization

Chapter 1 is the introduction to this thesis and provides a summary of the background information to
it. The problem description and the thesis organization are also provided here.
Chapter 2 examines current and past literature in the field of texture segmentation with references to
some of the popular models that have consistently provided a certain level of accuracy in this field.
Chapter 3 introduces the newly proposed texture descriptor with specific references to wavelet packet
texture features and pixel addition for improved ROI boundaries during segmentation. The
methodology behind this method is discussed here as well.

Chapter 4 provides the results of evaluating the proposed texture descriptor against real world and
medical images with specific focus on its application to glandular segmentation in histopathology.
Chapter 5 states the conclusions drawn from this thesis and suggests possible directions for future
research.

Chapter 2

Literature review

Introduction

Two fundamental techniques that have long been employed in image segmentation are edge-based
and region-based methods. Edge-based segmentation partitions images by locating
discontinuities or breaks in consistency among regions inside the image area. In contrast,
region-based methods rely on the uniformity of a particular
property within a sub-window. In section 2.1, a brief introduction to these two types of
segmentation methods is presented.

2.1 Edge-based segmentation

Edge-based segmentation, one of the oldest forms of segmentation, accounts for a large group of
techniques based on information about the edges in an image. This approach searches for
discontinuities in intensity which assists in highlighting object boundaries. Some researchers
may argue that rather than following the conventional meaning of the term "segmentation", this
particular approach may be more appropriately considered a form of boundary detection [12].
The Oxford English Dictionary defines an edge as the line along which two surfaces meet. For
the purposes of our problem, this can be considered a distinct boundary between two regions
that have their own discrete characteristics. Traditional edge-based segmentation takes an overly
simplistic view of image homogeneity: it assumes that every region is sufficiently uniform that
the borders separating regions may be determined using discontinuity metrics alone. This flawed
view has motivated many improved models over time, including some of the algorithms that will
be discussed in the next section.
Many edge-based segmentation approaches rely upon the concept of a convolution filter. Image

convolution is an image processing operation where each destination pixel is calculated based on
a weighted sum of a set of nearby source pixels. As an example, one may label the pixels in an
image as a one-dimensional array. Let the n-th destination pixel have the value b_n, let the m-th
source pixel have the value a_m, and let the digital filter F have the set of non-zero values {F_m},
where the filter is typically normalized such that Σ_m F_m = 1. The filter works as follows:

b_n = Σ_m F_m a_(n−m)   for each destination pixel n. (2.1)

The sum over the m terms in the convolution is the inner loop of the computation. The order of
the indices on the right hand side of this equation is a convention for the convolution. If the
images are of the form NxM pixels and the number of non-zero elements in the filter F is s, then
the convolution needs NMs multiplications and additions to be calculated. Local derivative
operations can then be performed by convolving the image with a variety of different kernels.
The Sobel and Canny edge detectors are both widely used in computer vision.
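As a concrete sketch of Equation 2.1, the following Python fragment (using NumPy; the step-edge signal and the normalized 3-tap averaging filter are illustrative choices, not values from the thesis) computes each destination pixel as a weighted sum of nearby source pixels:

```python
import numpy as np

def convolve_1d(a, F):
    """Compute b[n] = sum_m F[m] * a[n - m], with zero padding at the borders."""
    n_pix, n_tap = len(a), len(F)
    b = np.zeros(n_pix)
    for n in range(n_pix):          # outer loop: one destination pixel per step
        for m in range(n_tap):      # inner loop: the sum over the m filter taps
            if 0 <= n - m < n_pix:
                b[n] += F[m] * a[n - m]
    return b

# A normalized 3-tap averaging filter (its taps sum to 1, as the text requires).
F = np.array([0.25, 0.5, 0.25])
a = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # a one-dimensional step edge
b = convolve_1d(a, F)
```

For an N×M image and a filter with s non-zero taps, the two-dimensional analogue of this loop performs the NMs multiplications and additions mentioned above.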

The Sobel operator [13][14] is an edge detection operation which calculates the gradient of the
image's intensity at each point, providing the direction of the largest increase from light to
dark and the rate of change in that direction. The result shows how abruptly the image changes
at that point and thus how likely it is that that part of the image represents an edge, as well
as how that edge is likely to be oriented. The operator consists of a pair of 3x3 convolution
kernels (one is effectively the other rotated by 90 degrees). These kernels can be applied
separately to an input image in order to produce separate measurements of the gradient
component in each direction.

(a) (b)

Figure 2.1: An example of two different kernels. (a) An example of a simple vertical gradient
kernel; (b) an example of a vertical Sobel kernel.
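A minimal sketch of how the Sobel kernels might be applied in practice (pure NumPy; the unoptimized loop-based correlation and the step-edge test image are illustrative assumptions):

```python
import numpy as np

# The two 3x3 Sobel kernels: SOBEL_X responds to vertical edges,
# SOBEL_Y (its transpose, i.e. the kernel rotated by 90 degrees) to horizontal ones.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def apply_kernel(img, k):
    """Valid-mode sliding-window correlation with a 3x3 kernel (a sketch)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel(img):
    gx, gy = apply_kernel(img, SOBEL_X), apply_kernel(img, SOBEL_Y)
    magnitude = np.hypot(gx, gy)        # rate of intensity change
    orientation = np.arctan2(gy, gx)    # likely edge orientation
    return magnitude, orientation

# A vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag, _ = sobel(img)
```

The magnitude responds only at the columns bordering the step, which is exactly the "wide ridge" behaviour (two responding columns rather than one) that Canny's non-maximum suppression addresses.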

Although Sobel is very useful for simple thresholding, Canny combines thresholding with
contour following to reduce the probability of false contours and will be discussed next.

The Canny edge detection algorithm is considered by many to be the most rigorous edge detector.
Canny intended his approach to improve on several of the well-established methods of edge
detection. The first criterion of the algorithm is a low error rate. The second is that the
distance between the edge pixels discovered by the detector and the actual edge should be at a
minimum. The third criterion is that there should be only one response to a single edge; this
was added as a requirement because the first two criteria alone were not sufficient to fully
eliminate the possibility of multiple responses to an edge. Using these criteria, the Canny edge
detector first smoothes the image being processed to suppress noise. It then computes the image
gradient to highlight regions which have high spatial derivatives. The algorithm tracks along
these regions, suppressing any pixel which is not at a local maximum. The gradient array is then
further reduced through the process of hysteresis, which tracks along the pixels that have not
yet been suppressed. This technique uses two separate thresholds: if a magnitude is below the
lower threshold, it is set to zero; if it is above the higher threshold, it is marked as an edge;
magnitudes in between are kept only if they connect to an edge pixel.
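The hysteresis step described above can be sketched as a double-threshold flood fill (a simplified illustration of Canny's final stage only; the tiny magnitude array is an invented test case, and a full Canny implementation would precede this with smoothing, gradient computation and non-maximum suppression):

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Double-threshold hysteresis: pixels at or above `high` are seed edges;
    pixels at or above `low` survive only if connected (8-neighbourhood) to a
    seed; everything else is suppressed."""
    strong = mag >= high
    weak = mag >= low
    edges = np.zeros_like(mag, dtype=bool)
    edges[strong] = True
    q = deque(zip(*np.nonzero(strong)))   # start tracking from strong pixels
    h, w = mag.shape
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True
                    q.append((ni, nj))
    return edges

# One strong response (0.9) with weak neighbours, plus an isolated weak pixel.
mag = np.array([[0.0, 0.2, 0.9, 0.2, 0.0],
                [0.0, 0.0, 0.3, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.25]])
edges = hysteresis(mag, low=0.15, high=0.5)
```

The weak pixels adjacent to the strong response are retained, while the isolated weak pixel at the bottom right is discarded: exactly the single-response behaviour the third criterion demands.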

Of the two approaches above, one would generally opt for Canny, as it performs additional
processing, including non-maximum suppression, which eliminates the wide ridges often seen with
Sobel. Edge-based segmentation by gradient operations achieves reasonably good results on images
which have clearly defined borders, homogeneous intensity profiles and low noise. As a result of
the latter limitation, pre-processing operations such as smoothing would be considered a
prerequisite to using this method, were it not that smoothing is destructive to the sensitive
edge information. This point aside, there are certain benefits to using edge-based segmentation.
For instance, as it is not a computationally expensive operation, it can be completed much
faster than most modern approaches. It can also be implemented as a local convolution filter,
making it relatively easy to integrate into other applications.

2.2 Region-based segmentation

Region-based segmentation aims to partition regions or sub-windows based on common image
properties such as: intensity (either original or post-processed), colour, textures unique to
each region and spectral profiles which provide additional multi-dimensional image data [15].
These may sound familiar, as they are also encountered in texture classification approaches, a
subject area which is parallel to image segmentation. Region growing is an aggregation concept
which exploits the fact that pixels which lie closely packed together have similar intensity and
grayscale values [16]. A demonstration of this can be seen in Figure 2.2. Region growing works
as follows:

1. An initial group of small areas is iteratively merged based on a loosely defined set of
similarity criteria.
2. A set of seed points or seed pixels is then selected and used for comparison to other
neighbouring pixels.
3. Regions grow from these seeds by appending neighbouring pixels which are considered
similar.
4. If a single region stops growing, another seed is chosen which has not yet been assigned
to any other region and the process is started again.

(a) (b)

Figure 2.2: A typical example of pixel aggregation. In (a) we can see a set of seeds underlined
and in (b) the resulting segmentation.
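The four steps above can be sketched as a breadth-first flood fill from each seed (a deliberately simple illustration: similarity here is intensity distance to the seed value within a fixed tolerance, whereas practical implementations often compare against a running region mean; the two-region test image is an invented example):

```python
import numpy as np
from collections import deque

def region_grow(img, seeds, tol):
    """Grow one labelled region per seed: a 4-connected neighbour joins a
    region if its intensity is within `tol` of that region's seed value."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    for label, (si, sj) in enumerate(seeds, start=1):
        q = deque([(si, sj)])
        labels[si, sj] = label
        while q:
            i, j = q.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0
                        and abs(img[ni, nj] - img[si, sj]) <= tol):
                    labels[ni, nj] = label   # append similar neighbour
                    q.append((ni, nj))
    return labels

# Two flat regions separated by a sharp intensity step.
img = np.array([[10, 10, 10, 80, 80],
                [10, 10, 10, 80, 80],
                [10, 10, 10, 80, 80]], dtype=float)
labels = region_grow(img, seeds=[(0, 0), (0, 4)], tol=5.0)
```

Note that the seed-order bias discussed below is visible even in this sketch: each region grows to completion before the next seed starts, so overlapping similarity ranges would be won by whichever seed is processed first.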

Although region growing is a simple concept, there are a number of significant problems which
arise when integrating it into applications. Allowing a particular seed to grow in its entirety
before allowing other seeds to proceed creates a biased and inaccurate segmentation in favour of
the earliest regions that are segmented. The disadvantages of this may include ambiguities at
the edges of neighbouring regions which may not be possible to resolve correctly. Another issue
that is encountered when incorrectly using this approach is that different selections of seed
pixels may give rise to very different segmentation results [17].

Further research into region growing has led to the creation of more sophisticated segmentation
methods which utilize additional information to increase the accuracy of the approach. Region
competition [30] is one of them. This algorithm minimises a Bayesian criterion using a
variational principle and brings together the best features of both snake models and region
growing. Merging nearby regions under too strict a criterion of region uniformity produces
over-segmented results, whilst too loose a criterion leads to quite poor, under-segmented
partitions.

Parametric models are yet another region-based segmentation method based on the paradigm of
uniformity: if two regions contain similar values within a threshold, they may be considered
uniform. It is common for such parameter values to be obtained from image analysis, observation
data or knowledge of the imaging process. Such deductions are often made using conditional
probability density functions (PDFs) and Bayes' rule [18].

One of the constraints of estimation-based segmentation is the lack of an explicit
representation of the obvious uncertainty in parameter values. This makes such methods prone to
errors if the estimation of parameters is poor. Returning to Bayes, the probability of region
homogeneity exploits the complete set of information extracted from statistical image models
rather than relying on an estimate of parameter values. Today there exist statistical
segmentation methods based on both estimation and Bayesian approaches, expanded to several
models including the Active Contour Model and the Active Region Model. Both of these approaches
will be discussed further in Chapter 3.

2.3 Texture-based segmentation

Whilst no mathematical definition exists for texture, it is often attributed to human perception
as the appearance or feel of a particular material or fabric; for example, the arrangement of
threads in a textile. If this concept of "threads" is applied to "pixels", a similar definition
can be considered for the pixels in an image. In image processing, groups of pixels may be
labelled according to a particular application; for example, a group of pixels exhibiting green
colours arranged in a column structure could be labelled as exhibiting a "grass" texture. Human
perception also offers an analogy for the segmentation of textures: when one views an object, a
type of local spatial frequency analysis is performed on the image observed by the retina, and
this analysis is carried out by a bank of band-pass filters which allows one to distinguish
characteristics of the image such as its different textures [21].

The segmentation of textures has long been an important task in image processing. Texture
segmentation techniques aim to partition an image into homogeneous regions and identify the
boundaries which separate regions with different textures. Efficient texture segmentation
methods can be very useful in computer vision applications such as the analysis of biomedical
and aerial images. Several texture segmentation schemes are based on filter bank models, where a
collection of filters, known as Gabor filters, is derived from Gabor functions.

The goal of employing a filter bank is to transform differences in texture into filter-output
discontinuities at texture boundaries which can be detected. By locating such discontinuities, one
may segment the image into differently textured regions. Distinct discontinuities, however, only
occur if the Gabor filter parameters are well chosen. Segmenting an image containing textures is
typically completed in two core stages. The first stage involves decomposing the image into a
spatial-frequency representation (using a bank of digital band-pass filters such as Gabor
filters). The second stage is analysing this data to find regions of similar local spatial
frequency. This makes it possible for an algorithm to find multiple textures in a digital image.
There have been many studies in the area of multi-channel filtering, particularly in the wavelet
domain [22-25].
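A small Gabor filter bank of the kind described above can be sketched directly in NumPy (a real-valued kernel only; the size, wavelength and envelope width below are illustrative parameter choices, not values tuned for any experiment in this thesis):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """A real-valued Gabor kernel: a sinusoid of the given wavelength and
    orientation theta, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the sinusoid varies along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength)

# A small bank: one scale, four orientations 45 degrees apart.
bank = [gabor_kernel(size=15, wavelength=6.0, theta=t, sigma=4.0)
        for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Convolving an image with each kernel in the bank yields one response channel per orientation; discontinuities across those channels are the filter-output discontinuities the text refers to.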

One of the most essential choices to be made when exploring this problem domain is between
supervised texture segmentation and unsupervised texture segmentation. The main difference
between the two is in the prior knowledge available about the specific problem being addressed.
If one can establish that the image contains only a small set of different, distinct textures,
then one may delineate small regions of homogeneous texture, extract feature vectors from them
using a chosen algorithm and utilize these vectors as fixed points in the feature space. New
feature vectors can then be labelled by assigning them the label of their closest fixed point.
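The fixed-point labelling step just described is a nearest-neighbour assignment, which can be sketched as follows (the two-dimensional feature values and the texture names are invented stand-ins for real texture feature vectors):

```python
import numpy as np

def label_by_nearest(fixed_points, fixed_labels, features):
    """Assign each feature vector the label of its closest fixed point
    (Euclidean distance in feature space)."""
    fixed_points = np.asarray(fixed_points, dtype=float)
    features = np.asarray(features, dtype=float)
    # Pairwise distance matrix of shape (n_features, n_fixed_points).
    d = np.linalg.norm(features[:, None, :] - fixed_points[None, :, :], axis=2)
    return [fixed_labels[i] for i in np.argmin(d, axis=1)]

fixed = [[0.1, 0.9], [0.8, 0.2]]          # one prototype per known texture
labels = ["grass", "brick"]
out = label_by_nearest(fixed, labels, [[0.2, 0.8], [0.9, 0.1]])
```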

Through neural networks or some other machine learning approach, the system may then be told
when it makes mistakes so that it can adjust its segmentation model accordingly. If, however,
the number of potential textures is deemed too large, or if no information about the type of
texture to be presented to the system is available, then an unsupervised method must be used.
With this method, statistical analysis must be performed on the entire distribution of feature
vectors before each feature may be assigned to a class. The goal of this is to recognise
clusters in the distribution and assign the same label to all members of a cluster. This is
usually a much harder task to accomplish.

In this thesis, we will be using supervised segmentation for our chosen approach as a portion of
the work presented builds upon the Geodesic Active Region model (a supervised segmentation
algorithm).

2.4 Hybrid segmentation methods

Some approaches have attempted to integrate both region- and edge-based segmentation approaches
[26-28], whilst others have fused region and contour segmentation using watersheds [29]. The
combination of two or more different algorithms has also produced some interesting results
[30][31] - an encouraging development that I will be exploiting in my own proposed method, to be
introduced in a later chapter.

2.5 Contour-based segmentation

Over the past decade, extensive studies have been conducted on the curve evolution of snakes and
their applications to computer vision. In comparison to some of the other methods available
today, such as region-growing and edge-flow methods, the active contour model has maintained a
position of favourable note due to its lack of strong sensitivity to smoothed edges and
discontinuities in contour lines. The original concept of a snake was first introduced in 1988
[32] and was later advanced by a number of researchers. It can be described as the deformation
of an initial contour towards an object's boundary by minimization of a function R, defined
such that the minimum of R is achieved at the object's boundary. This minimization, in which one
set of components controls the overall smoothness of the curve while another controls the
attraction force pulling the curve towards the boundary, is typical of the approaches
researched. There are two primary types of active contour model - geometric [33][34][35] and
parametric [36].

The parametric active contour models [36][38] are part of a class of conventional snake models
where a curve is explicitly represented by a group of curve points which are moved by an energy
function. This explicit mathematical formulation makes it a powerful image segmentation
paradigm compared with its implicit alternative: the parametric active contour model offers
simpler integration of image data, desired curve properties and domain-related constraints
within a single process. Although this places it at an advantage, the parametric model does
suffer from limitations, such as not being able to handle complex topologies or topological
changes. This limits its effective usage; however, work has been done to ease these conditions
and make it more broadly appealing [38].

The geometric models consist of embedding a snake as a zero-level set of a higher dimensional
function and solving the related equation of motion rather than computing curves.
Methodologies such as this are best suited to the segmentation of objects with complex shapes
and unknown topologies [37]. Unfortunately, as a result of higher dimensional formulation,
geometric contour models are not as convenient as parametric models in applications such as
shape analysis and user interaction.

2.6 Snakes: Active Contour Models

Active Contour models (Snakes) have been used in the past in computer vision problems related
to image segmentation and understanding. They are a special case of the deformable model
theory [39] which are analogous to mechanical systems where a force of influence may be
measured using potential and kinetic energy. An active contour model is defined as an energy
minimizing spline where the snake’s energy is dependent upon its shape and location within the
image. Local minimization of this energy then corresponds to desired image properties. Snakes
do not solve the problem of discovering contours inside images, but instead depend on other
mechanisms, such as interaction with a user or information from image data, to assist them in
achieving a segmentation. The user must supply an approximate shape and starting position for
the snake (ideally somewhere near the desired contour). Prior knowledge is then employed to push
the snake towards an acceptable solution. In Figures 2.3 and 2.4 one may see examples of one
method which may be used to provide a good starting point (a prior) for the active contour
model - a binarization of the original source image. This method is very useful for images
containing a small set of textures but does not perform as well on those containing many (an
example of this may be viewed in Figure 2.6 (a)).

(a) (b) (c) (d)

Figure 2.3: An example of Active Contour Model segmentation. (a),(c) Two sets of ACM
segmentation results compared to the binary mask initializers (b),(d) used to achieve these
outputs.

(a) (b) (c)

Figure 2.4: An example of more difficult texture segmentation using the ACM. (a) ACM
segmentation of an image with a wide variety of individually distinct textures, and (b) its binary.
(c) The ACM results on a synthetically generated image.

From a geometric perspective, snakes are a parametric contour which are assumed to be closed
and embedded in a domain. With this in mind, a snake may be represented as a time varying
parametric contour. Parametrically, a simplified non-time-varying ACM snake may be defined by:

v(s) = (x(s), y(s))   (2.2)

where s ∈ [0,1] is the arc length and x(s), y(s) are the x and y co-ordinates along the contour.
Next, the energy of the contour may be expressed by:

∫ E_snake ds = ∫ E_internal(v) ds + ∫ E_external(v) ds + ∫ E_image(v) ds   (2.3)

Here, E_internal corresponds to the internal energy of the contour and E_external to the
external energy. The internal forces arise from the shape and discontinuities of the snake,
whilst the external forces are based on the image interface or a higher-level understanding
process. E_image is the image's energy, which represents lines, edges and termination terms.

The contour's internal energy is composed of the first-order differential v_s = dv/ds, which is
controlled by α(s), and the second-order differential v_ss = d²v/ds², which is controlled by
β(s). In extended form, this is expressed as follows:

∫ E_internal ds = ∫ ( α(s) |dv/ds|² + β(s) |d²v/ds²|² ) ds   (2.4)

where α(s) and β(s) specify the elasticity and stiffness of the contour snake.

The purpose of the internal energy E_internal is to impose a shape on the deformable snake and
ensure that a constant distance is maintained between nodes in the contour. With this in mind,
the first-order term adjusts the elasticity of the snake, making the active contour shrink or
stretch, while the second-order term controls its curvature and stiffness. Visually, if there
are no other influences acting, the continuity energy term pushes an open contour into a
straight line and a closed contour into a circle.
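A discrete sketch of the internal energy of Equation 2.4 for a closed contour, with finite differences standing in for dv/ds and d²v/ds², and constant α, β as an illustrative simplification of the spatially varying α(s), β(s):

```python
import numpy as np

def internal_energy(points, alpha, beta):
    """Discrete internal snake energy for a closed contour:
    sum of alpha * |v_s|^2 + beta * |v_ss|^2 over the contour nodes."""
    v = np.asarray(points, dtype=float)
    v_s = np.roll(v, -1, axis=0) - v                             # first difference
    v_ss = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)  # second difference
    return np.sum(alpha * np.sum(v_s ** 2, axis=1)
                  + beta * np.sum(v_ss ** 2, axis=1))

# A regular 16-gon approximating a circle, and a jagged perturbation of it.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
jagged = circle * (1 + 0.3 * (-1) ** np.arange(16))[:, None]
```

As expected from the text, the smooth circular contour carries a lower internal energy than the jagged one, which is exactly why, absent other forces, minimization drives a closed snake towards a circle.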

A variety of functionals (or metrics) can be used to attract the snake to different artefacts in the
image. Let us take for example a line functional and an edge functional. As described by Kass et
al. in [32] a line functional can be expressed as simply as:

E_line = f(x, y)   (2.5)

where x,y are coordinates in an image I and f(x,y) is a function which denotes the gray levels at
the location (x,y). The most simple useful image functional based on this is image intensity
where f is substituted for I. In this case, the snake will either attempt to align itself with the
lightest or darkest nearby contour.

An edge-based functional attracts the contour to areas with strong edges and, following Kass et
al. [32], can be expressed as:

E_edge = −|∇f(x, y)|²   (2.6)

so that minimizing the snake's energy draws it towards large image gradients.
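Equation 2.6 can be evaluated over a whole image in a few lines (following Kass et al., the sign is negative so that energy minimization attracts the snake to strong gradients; the step-edge test image is an illustrative assumption):

```python
import numpy as np

def edge_energy(f):
    """Edge functional E_edge(x, y) = -|grad f(x, y)|^2, computed with
    central differences; low (very negative) values mark strong edges."""
    gy, gx = np.gradient(f.astype(float))   # np.gradient returns (d/drow, d/dcol)
    return -(gx ** 2 + gy ** 2)

img = np.zeros((5, 6))
img[:, 3:] = 1.0            # a vertical step edge
E = edge_energy(img)
```

Flat areas score zero while the columns around the step score strictly lower, so a snake minimizing this energy settles onto the edge.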

(a) (b) (c)

Figure 2.5: An Active Contour attracted to edges. (a) An illustration of the target area. Here the
shape of the snake contour between the edges in the illusion is completely determined by a spline
smoothness term [32] (b) A termination snake attracted to the edges and lines in equilibrium on
the subjective contour (extended from Kass et al.)[32][24] (c) An initialization of the ACM.

The Active Contour Model uses minimisation of the energy function as a means of achieving edge
detection of objects. The final snake (a contour of the object of interest) is, however, highly
dependent on its initial starting position: it starts from a path close to the solution and
converges to a local minimum of the energy, ideally as close to the expected object boundaries
as possible. There are several positions at which convergence may occur, as can be seen above in
Figure 2.5 (c). Here, the curve a is outside the object, the curve b overlaps it and the curve c
is perpendicular to it.

2.7 Level-set methods

Osher and Sethian [33] proposed a new concept for implementing active contours known as level
set theory. Level set methods, rather than following an interface directly, take an original
curve and embed it as an isosurface of a function; the curve's evolution is then mapped into an
evolution of the level set function itself. In [33], Osher and Sethian harnessed a
two-dimensional Lipschitz function ɸ(x,y) : Ω → ℝ to represent a contour implicitly. The term
ɸ(x,y) is referred to as a level set function, and on its zero level a contour C is defined such
that:

C = {(x, y) : ɸ(x, y) = 0}, ∀(x, y) ∈ Ω   (2.7)

where Ω denotes the complete image plane. As the level set function increases from its initial
stage, the correlated set of contours C propagates towards the outside. Based on this
definition, the contour evolution is equal to the evolution of the level set function:

∂C/∂t = dɸ(x,y)/dt   (2.8)

The primary advantage of using the zero level is that a contour may be expressed as the
boundary or border that lies between a positive area and a negative area, such that the contours
may be explicitly identified by simply checking the sign value of the level set function ɸ(x,y).
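The sign test of Equation 2.7 can be demonstrated with a signed distance function for a circle (grid extent and radius are illustrative choices; here the convention is ɸ < 0 inside the contour and ɸ > 0 outside):

```python
import numpy as np

# Signed distance level set for a circle of radius 6 on a 21x21 grid:
# phi(x, y) = sqrt(x^2 + y^2) - 6, whose zero level is the contour.
y, x = np.mgrid[-10:11, -10:11].astype(float)
phi = np.hypot(x, y) - 6.0

inside = phi < 0                      # negative area: interior of the contour
on_contour = np.isclose(phi, 0.0)     # the zero level: the contour itself
```

No explicit list of contour points is stored anywhere; membership is recovered purely by checking the sign of ɸ, which is the advantage the text describes.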

Contour deformation is typically represented in the form of a PDE. Osher and Sethian [33]
originally proposed a formulation of contour evolution which used the magnitude of the level set
gradient, given by:

dɸ(x,y)/dt = |∇ɸ| (v + cκ(ɸ))   (2.9)

Here, v signifies a constant speed used to deform the contour and κ expresses the mean curvature
of the level set function ɸ(x,y).

Figure 2.6: An example of the level set evolution of a circle (solid line) with normal speed F.
For a contour C0, the initial level set function ɸ is zero at the initial contour points, given
by ɸ0(x,y) = 0 for (x,y) ∈ C0.

When applying level sets to image segmentation, we seek to detect the boundaries of an object in
the image we wish to segment. This is achieved by initializing an interface at a position in the
image and then deforming it by allowing appropriate forces to act on it until the correct
boundaries in the image are found. Level set methods differ from other front-tracking techniques
as they make use of an implicit representation of the interface. Level sets are an intuitive
idea because many complications such as breaking and merging are easily handled by the method,
with support for both two- and three-dimensional surfaces.
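One explicit time step of the constant-speed part of Equation 2.9 can be sketched as follows (the curvature term is omitted for brevity, central differences are used rather than the upwind schemes production level-set codes require, and the circle initialization is an illustrative assumption):

```python
import numpy as np

def evolve_step(phi, v, dt):
    """One explicit Euler step of d(phi)/dt = v * |grad phi|."""
    gy, gx = np.gradient(phi)
    return phi + dt * v * np.hypot(gx, gy)

# Signed distance function for a circle of radius 5 (negative inside).
y, x = np.mgrid[-10:11, -10:11].astype(float)
phi = np.hypot(x, y) - 5.0
phi_next = evolve_step(phi, v=1.0, dt=1.0)
```

With this inside-negative convention, a positive speed v raises ɸ everywhere by roughly v·dt (since |∇ɸ| ≈ 1 for a signed distance function), so the zero level, and with it the contour, moves without any explicit list of contour points ever being updated.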

We now return to the paradigm of edge-based segmentation. Contour-based methods which make use
of edge information typically involve two key parts: regularity, which determines a contour's
shape, and the edge detection factor, which attracts contours towards the edges.

Solving the classical problem of edge-based segmentation using snakes amounts to finding, for a
set of constants, a curve C that minimizes the energy associated with that curve. Considering an
image which contains multiple objects, it is not possible to detect all of the objects present,
because these approaches cannot directly deal with changes in topology; topology-handling
routines must thus be incorporated to make this possible. Classic energy-based models also
require the selection of parameters which control the trade-off between smoothness and proximity
to the object.

Caselles et al. addressed both these issues in [35] with their Geodesic Active Contour Model
(GACM). The Geodesic Active Contour Model is based on active contours evolving in time according
to intrinsic geometric measures of an input image. Evolving contours split and merge naturally,
which allows the simultaneous detection of multiple objects and of both interior and exterior
boundaries. In addition to this, the GACM applies a regularization effect from curvature-based
curve flows, which allows it to achieve smooth curves without the need for the high-order
smoothness terms found in energy-based approaches.

The GACM was one of the first active contour approaches to utilize level sets for image
segmentation. By embedding the evolution of the curve C inside a level-set formulation,
topological changes are handled automatically, and accuracy and stability are achieved using a
proper numerical algorithm.

A stopping function g(I) is also employed by the GACM for the purpose of stopping an evolving
curve when it arrives at the object's boundaries, as can be seen in Equation 2.10:

g(I) = 1 / (1 + |∇Î|^p)   (2.10)

where Î is a smoothed version of the image I computed using Gaussian filtering and p = 1 or 2.
For an ideal edge, the gradient |∇Î| behaves like a delta function, so g = 0 and the curve stops
at u_t = 0. One can then find the boundary given by the set u = 0.
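A sketch of the stopping term of Equation 2.10 in NumPy (the Gaussian smoothing is a simple separable kernel built inline for self-containment, and the strong vertical edge in the test image is an illustrative assumption):

```python
import numpy as np

def stopping_function(img, sigma=1.0, p=2):
    """GACM stopping term g = 1 / (1 + |grad I_hat|^p), where I_hat is a
    Gaussian-smoothed copy of the image."""
    # Build a 1-D Gaussian kernel and smooth along each axis in turn.
    half = int(3 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                 0, img.astype(float))
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                 1, smooth)
    gy, gx = np.gradient(smooth)
    return 1.0 / (1.0 + np.hypot(gx, gy) ** p)

img = np.zeros((9, 12))
img[:, 6:] = 10.0            # strong vertical edge
g = stopping_function(img)
```

In flat regions g stays close to 1, so the curve keeps moving, while at the edge g collapses towards 0 and halts the evolution, which is precisely the behaviour described above.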

Although edge-based approaches such as the GACM work acceptably on simple segmentation problems,
their reliance on gradient information makes them inadequate for many realistic images.
Edge-based models are susceptible to missing smooth or unclearly defined boundaries and are also
sensitive to noisy data. Region-based approaches, however, have specific advantages over
edge-based techniques; these include the ability to produce coherent regions which link together
edges and bridge gaps produced by missing pixels, and much better handling of images containing
noise.

2.8 Chan-Vese Active Contour Model

Chan and Vese proposed a piecewise-constant active contour model which employs the Mumford-Shah
segmentation model to extend the original algorithm [46][47]. Rather than searching for edges,
piecewise-constant ACMs deform a contour based on the minimization of an energy function.
Constants approximate statistics of the image intensity within a particular subset, whilst
piecewise constants approximate similar measures across the entire area of an image.

As many classical snakes and active contour models rely on an edge function depending on the
image gradient ∇u0 to stop the curve evolution, these models can detect only objects with edges
defined by gradient. In practice, discrete gradients are bounded, so the stopping function g is
never exactly zero on the edges and the curve may pass through the boundary. The Chan-Vese
approach instead minimizes an energy-based segmentation which employs a stopping term based on
the Mumford-Shah segmentation technique. By doing this, they obtain a model which may detect
contours both with and without gradient, for instance objects with very smooth boundaries or
even discontinuous boundaries.

Assume the image u0 is formed by two regions of approximately piecewise-constant intensities of
distinct values u0^i and u0^o, and that the object to be detected is represented by the region
with the value u0^i. Denote its boundary by C0, so that u0 ≈ u0^i inside the object and
u0 ≈ u0^o outside it. Consider the following fitting term:

F1(C) + F2(C) = ∫_inside(C) |u0(x,y) − c1|² dx dy + ∫_outside(C) |u0(x,y) − c2|² dx dy   (2.11)

where C is any other variable curve and the constants c1, c2, depending on C, are the averages
of u0 inside C and outside C respectively. In this simple case, it is obvious that the boundary
of the object is the minimizer of the fitting term:

inf_C { F1(C) + F2(C) } ≈ 0 ≈ F1(C0) + F2(C0)   (2.12)

Here, if the curve C is outside the object, F1(C) > 0 and F2(C) ≈ 0. If the curve C is inside
the object, F1(C) ≈ 0 and F2(C) > 0, and if C is located both inside and outside the object,
F1(C) > 0 and F2(C) > 0.
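The fitting term of Equation 2.11 is straightforward to evaluate on a discrete grid, with the curve represented by a boolean inside/outside mask (the bright-square test image and the deliberately misplaced mask are invented examples):

```python
import numpy as np

def fitting_term(u0, inside_mask):
    """Chan-Vese fitting term F1 + F2: c1, c2 are the mean intensities
    inside and outside the current curve, and the term sums the squared
    deviations from those means."""
    c1 = u0[inside_mask].mean()
    c2 = u0[~inside_mask].mean()
    f1 = np.sum((u0[inside_mask] - c1) ** 2)
    f2 = np.sum((u0[~inside_mask] - c2) ** 2)
    return f1 + f2

# A piecewise-constant image: bright square object on a dark background.
u0 = np.zeros((10, 10))
u0[3:7, 3:7] = 1.0
true_mask = u0 > 0.5                         # curve exactly on the object boundary
bad_mask = np.zeros_like(true_mask)
bad_mask[0:4, 0:4] = True                    # a curve straddling the boundary
```

The term vanishes only when the mask coincides with the object boundary; any curve lying inside, outside or across the object leaves a strictly positive residual, which is the minimization property stated in Equation 2.12.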

The region partitioning achieved after processing has completed may be expressed as a group of
piecewise constants. The Chan-Vese algorithm has been demonstrated to reach the quickest
convergence among extended active contour approaches as a result of this simplistic
representation. Experiments comparing the Chan-Vese ACM without edges and the Caselles geometric
ACM, measuring the number of iterations required for the segmentation to converge, concluded
that the Chan-Vese model was able to converge in between 200 and 900 iterations, in under 120
seconds, without heavy memory usage.

2.9 Geodesic Active Region Model (GARM)

Image segmentation is an area that is constantly evolving. As previously discussed, some of the
earliest techniques for boundary-based frame partition made use of simple local filtering
methods such as edge or boundary detection. In the last section, the Active Contour Model (ACM)
- a widely used frame partitioning approach which coupled traditional snake-based methods with
level set theory - was discussed. Although promising, the active contour model was not
specifically created for the purposes of complex texture segmentation. Its lack of good support
for topological changes meant that its ability to segment images containing multiple objects was
very limited.

More recently, a new paradigm called active regions was introduced as a means of combining both
region and boundary information. Work in this area includes that of Chakraborty et al. [48] and
the proposal of the Geodesic Active Region Model (GARM) by Paragios and Deriche [49]. This model
is a substantial extension of the active contour model due to its incorporation of region-based
information to assist in locating partitions where both the interior and exterior of a region
preserve desirable image properties.

The active region model combines boundary and region based frame partitioning under a curve
based energy framework which attempts to find the minimal length curves which preserve
regularity, attraction to object-of-interest boundary points and generate optimal partitions based
on region properties of different hypotheses. The set of initial curves produced by the model
propagate towards a best partition under the influence of both the boundary and region based
forces which are constrained by a regularity force.

Statistical analysis based on the minimum description length and maximum likelihood principles
determines the number of sub-regions (and the PDF for these regions) by using a variety of
Gaussian filters. The probability of a region is then estimated from the PDF using a priori
knowledge as part of a supervised segmentation problem. Using probabilistic edge detection,
information about the boundaries may be determined from the regional probabilities of the
neighbourhood. It is easy to visualise this probability as the likelihood of an image pixel
lying on an edge if its neighbouring pixels on either side both have high regional probabilities
for different classes [25][29].

The image input to the GARM is considered to be composed of two primary classes (h_a, h_b). As
this is a supervised segmentation approach which relies on prior knowledge, it can be assumed
that some additional information about these two classes (namely, texture samples from each) is
available. The task of discovering the optimal partition using the GARM is equivalent to
accurately extracting the boundaries between the two regions R_a and R_b. This may be achieved
using the Geodesic Active Contour Model (which has been previously discussed). One thus seeks to
minimize the equation:


E(∂R) = ∫_0^1 g(p_c(I(∂R(c)))) |∂R′(c)| dc   (2.13)

where the factor g(p_c(·)) encodes the boundary probability and boundary attraction, and the
arc-length factor |∂R′(c)| provides regularity.

Here ∂R is a parameterized planar form of the partition boundaries, the density function p_c
measures the likelihood of a given pixel lying on the boundaries, and g is a positive,
decreasing function with minimal values at the locations in the image containing the desired
features. The visual properties of the classes (h_a, h_b) are additional cues for performing
segmentation, with the overall aim being to discover a consistent frame partition between the
observed data, the associated hypotheses and the expected properties of these hypotheses.

As the active region model considers both boundary and region forces at the same time, we can
also consider an equivalent region problem: the creation of a consistent partition between
the observed data, the associated hypotheses and their expected properties. This
particular partition may be viewed as the problem of optimising the posterior frame partition
probability which, with respect to partitions P(R), would be represented by a density function as
follows:

\[
p_{s}(P(R) \mid I) = \frac{p(I \mid P(R))\, p(P(R))}{p(I)} \tag{2.14}
\]

where I is the source image, p(I) is the probability of I being in the set of all possible images,
p(P(R)) is P(R)'s probability in the set of all possible partitions and p_s(P(R) | I) is the posterior
frame partition density function (i.e. the posterior segmentation probability for I, given P(R)). The
minimal form of this representation, after constants and other redundant terms have been
removed [57], is the posterior segmentation probability for a partition P(R) such that:

\[
p_{s}(P(R) \mid I) = \prod_{s \in R_{A}} p_{A}(I(s)) \prod_{s \in R_{B}} p_{B}(I(s)) \tag{2.15}
\]

where p_A and p_B are region probabilities which measure the overall likelihood of a pixel
preserving its expected region properties, and p_s(P(R) | I) is the posterior segmentation
probability for the image I, given the partition P(R).
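To make equation 2.15 concrete, the sketch below (hypothetical code, not the thesis implementation) evaluates the posterior of a candidate labelling by summing per-pixel log region likelihoods; logs are used to avoid numerical underflow, and the Gaussian region models and their parameters are invented purely for illustration:

```python
import math

def log_posterior(pixels, labels, p_a, p_b):
    """Log of equation 2.15: the sum of per-pixel log region likelihoods.

    pixels : list of intensity values
    labels : list of 'A'/'B' class assignments (a candidate partition)
    p_a, p_b : functions mapping an intensity to a region likelihood
    """
    total = 0.0
    for value, label in zip(pixels, labels):
        p = p_a(value) if label == 'A' else p_b(value)
        total += math.log(max(p, 1e-300))  # guard against log(0)
    return total

# Toy Gaussian region models (illustrative parameters only)
def gaussian(mu, sigma):
    return lambda x: math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

p_a, p_b = gaussian(50, 10), gaussian(200, 10)
pixels = [48, 52, 198, 205]
better = log_posterior(pixels, ['A', 'A', 'B', 'B'], p_a, p_b)
worse = log_posterior(pixels, ['B', 'B', 'A', 'A'], p_a, p_b)
assert better > worse  # the correct partition has the higher posterior
```

The maximising partition is the one whose labelling agrees with the region models, which is exactly what the curve evolution below searches for.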

The level-set equations which drive the curve propagation for the GARM may then be expressed
as:

\[
\frac{\partial \phi}{\partial t}(u) = \left[ \alpha \sum_{j=1}^{N} \omega_{j} \log\!\left( \frac{p_{\{R_{0},j\}}(I_{j}(u))}{p_{\{R_{k},j\}}(I_{j}(u))} \right) + (1-\alpha) \left( g(p_{B}(u))\, K(u) + \nabla g(p_{B}(u)) \cdot \frac{\nabla \phi(u)}{\left| \nabla \phi(u) \right|} \right) \right] \left| \nabla \phi(u) \right| \tag{2.16}
\]

Here:
• u = (x, y) is a point on the initial curve in either region R_0 or R_k

• I_j(u) specifies the jth band of the image I(u)

• p_{R_n,j}(I_j(u)) represents the regional probability, denoting the probability that a pixel
I_j(u) is a member of the sub-region R_n

• p_B(u) specifies the probabilistic edge detection operator, expressing the probability that a
boundary pixel is found at u

• g(p_B) represents a positive and decreasing function of this probability. The regional
probability is calculated from each band and summed.

• K(u) denotes the curvature of the level set of φ passing through u

When provided with an initial curve, the PDE in equation 2.16 creates a partition of the image -
determined by a curve which is attracted to the region boundaries - where the exterior curve
region corresponds to the background pattern in the image and the interior corresponds to the
other patterns. Although this equation could have been implemented using a Lagrangian
approach, that decision would have greatly limited its capabilities, as it would be unable to deal
with changes in topology of the moving front. Instead, by harnessing the work of Osher and
Sethian [22], Paragios and Deriche [49] were able to represent the moving front as the zero-level
set of a function φ, making the representation topology-free. The minimization of the GARM's
objective function is then essentially the steady-state solution of the above equation, where
geometric properties are estimated directly from the level set frame.

Building on from this, one of the problems with the original Geodesic Active Contour approach
[32] was that it was not originally defined for the problem of texture segmentation. This was
addressed by the GARM [50], which was extended to solve texture-based segmentation through
greater support for changes in topology (as discussed above) and consideration of both boundary
and region information. The GARM's approach to the problem of texture segmentation is to
employ Gabor features, which have the power to discriminate textured surfaces based on their
orientation, scale or the mean of the magnitude. Although this results in a highly capable texture
segmentation approach, Gabor filters introduce quite a lot of redundancy and, in turn, feature
channels. This is an area where significant improvement to the model is possible, and work such
as [51] demonstrates that it is possible to reduce the number of feature channels by selecting a
small set of descriptive features using the structure tensor and non-linear diffusion.

There have been some other interesting developments in this area, such as [52], which offered a
modified Mumford-Shah functional with an alternative cartoon limit facilitating the integration of
a statistical prior on the shape of the propagating contour. Consequently, the contour is limited to a
subspace of familiar shapes whilst remaining free to translate, scale or rotate. This concept of a
shape prior greatly improves the power of the segmentation technique on noisy or obscure
backgrounds. Other noteworthy extensions to Paragios and Deriche's active region model are
[53], where the optimized energy terms also take account of the number of regions, and the
idea of multiple-region segmentations, generalising the original active region model [46] to a
multi-phase model for improved results.

2.10 Summary

In this chapter a number of different approaches for image segmentation were reviewed. These
methods include snakes [29], contour-based segmentation [33][36], the Active Contour Model
(ACM) [12][32], the Geodesic Active Region Model (GARM) [49] and hybrid segmentation
techniques [30][31]. Snake-based segmentation is a basic, well-established mode of segmenting
images using deformable model theory (DMT) [39], whilst contour-based segmentation
evolves this approach, providing methods whereby curve points are influenced by an energy
function (parametric active contour models) [38] or where one embeds a snake as a zero-level set
and solves the related equation of motion (geometric active contour models) [37]. The ACM
further improves the accuracy offered by these methods by considering internal and external energy
parameters, where sets of nodes lying on object edges may locate contours through the process of
energy minimization [32]. It is, however, unable to segment textured images well, an issue
addressed by the GARM [49]. The GARM introduces an increased level of segmentation
accuracy by extending a contour-based segmentation approach to consider both boundary and
region-based forces [49]. As it is one of the most accurate segmentation approaches available at
the time of writing this thesis, our focus in the next chapter will be exploring the integration of
wavelet-packet based feature data into the GARM.

Chapter 3

Wavelet-based Geodesic Active Region Model (WB-GARM)

Introduction

Different medical imaging methods expose different characteristics and with each method, the
differences in image quality, structure and visibility can vary considerably [54]. This poses a
particular problem when it comes to the task of segmentation, where a clinician may wish to
separate a particular region of interest (ROI) from the rest of the image for further analysis or
even operation [55]. The quality of a medical image is determined by several factors. These
include, but are not limited to: the type of equipment used, the imaging method employed and
the imaging configuration selected by the device operator [56]. The quality of the image
produced by an imaging method may be affected by six characteristics [54][57]: noise,
resolution, blurring, contrast, distortion and artefacts. These factors will be looked at in greater
detail in the next section.

There is, however, a finite amount of work which can be done to improve the segmentation
models used in these instances, and at some point the question must be posed as to whether there
exists a method to improve the underlying texture data being fed into such an algorithm, in
addition to optimizing the model itself. From a segmentation perspective, the most important
features in an image are the boundaries surrounding the ROIs. These image areas can be
particularly difficult to isolate, especially when dealing with medical images containing any of
the quality-affecting issues previously mentioned. One logical view of how to improve a
segmentation technique's ability to separate the foreground and background of an image is to
improve its visibility of the region-of-interest (ROI) boundaries. There are several methods by
which this boundary sharpening may be achieved, with some techniques being more effective
than others.

3.1 Texture Descriptors

A critical aspect of texture analysis is the extraction of textural features which can be used as
input during the modelling phase [63]. This is a key step as the ability to select the most
representative features is directly related to the performance and discrimination power of a
texture description model.

In filtering-based segmentation, linear and non-linear operators are applied to input images to
create a multi-dimensional vector of responses (usually referred to as the feature vector). The
operators used are best selected such that the feature vector describes a variety of different textural
properties. A significant body of work exists in the area of optimal filter selection, such as [64],
where the output of a Gabor filter is modelled as a Rician distribution, and [65], where filter
parameters selected using an immune genetic algorithm are applied in order to maximise
discrimination between the multi-textured regions.

The three filters employed by the Geodesic Active Region Model are displayed below in
Equations (3.1) – (3.3).

The Gaussian operator g(x,y):


\[
g(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \tag{3.1}
\]

The isotropic center-surround operator (Laplacian of Gaussian filter) l(x,y) is:

\[
l(x, y) = S \left( 1 - \frac{x^{2}+y^{2}}{2\sigma^{2}} \right) e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \tag{3.2}
\]

where S is a scale factor and σ denotes the Gaussian standard deviation.

The two-dimensional Gabor operators analyze the image simultaneously in both the space [σ] and
frequency [θ, φ] domains:

\[
g_{G}(x, y \mid \sigma, \theta, \phi) = g(x, y \mid \sigma)\, e^{-j 2\pi (\theta x + \phi y)} \tag{3.3}
\]

We can decompose the above Gabor function into two primary components: the real part g_R(x,
y | σ, θ, φ) and the imaginary part g_I(x, y | σ, θ, φ). The texture features in the GARM are
captured by the spectrum analyser {s(σ, θ, φ)} of the two components. The concept behind a
spectrum analyser is to pass a signal of interest through a set of parallel narrow band-pass filters.
The outputs of these filters are a measure of the signal's strength within each filter's bandwidth.
The narrower the filter, the higher the frequency resolution of the power spectrum.
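To make this machinery concrete, the sketch below (an illustrative toy, not the GARM implementation; the kernel size, σ and frequency parameters are invented) builds the real and imaginary parts of equation (3.3) and measures the power of a patch's response, showing that a vertically striped patch responds far more strongly to a kernel tuned across x than to one tuned across y:

```python
import math

def gabor_kernel(size, sigma, theta_u, phi_v):
    """Real and imaginary parts of the 2-D Gabor operator of eq. (3.3):
    a Gaussian envelope (eq. 3.1) modulated by a complex sinusoid."""
    half = size // 2
    real, imag = [], []
    for y in range(-half, half + 1):
        row_r, row_i = [], []
        for x in range(-half, half + 1):
            env = math.exp(-(x * x + y * y) / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
            phase = 2 * math.pi * (theta_u * x + phi_v * y)
            row_r.append(env * math.cos(phase))
            row_i.append(env * -math.sin(phase))  # e^{-j * phase}
        real.append(row_r)
        imag.append(row_i)
    return real, imag

def power(patch, kernel):
    """Squared magnitude of the inner product of a patch with one kernel part."""
    s = sum(patch[y][x] * kernel[y][x]
            for y in range(len(kernel)) for x in range(len(kernel)))
    return s * s

# A vertical-stripe patch (intensity varies along x with period 4)
size = 9
patch = [[255 if x % 4 < 2 else 0 for x in range(size)] for y in range(size)]
re_h, im_h = gabor_kernel(size, sigma=2.0, theta_u=0.25, phi_v=0.0)  # tuned across x
re_v, im_v = gabor_kernel(size, sigma=2.0, theta_u=0.0, phi_v=0.25)  # tuned across y
resp_h = power(patch, re_h) + power(patch, im_h)
resp_v = power(patch, re_v) + power(patch, im_v)
assert resp_h > resp_v  # orientation selectivity of the Gabor response
```

This orientation- and frequency-selective power response is what the spectrum analyser aggregates into texture features.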

Paragios and Deriche [66] opted for a large and general filter bank composed of isotropic and
anisotropic filters for use in their texture segmentation model. These filters provide good filter
responses for images with non-texturally-complex backgrounds; however, based on our
experiments, they are unable to assist in achieving desirable segmentations for detailed
medical images of poor quality or low contrast. The isotropic, anisotropic and Gabor filters
were also unable to generate filter responses capable of accurately describing the
edges around certain textured real-world objects.

Although the GARM's standard filter bank provides the capability to help achieve
segmentations of desirable quality, there are many cases where an image may possess
properties which demand a more robust solution. These include images with low
levels of contrast difference across separate regions, images containing areas of
similar texture which belong to different classes, and images containing objects which
occupy a very small region of pixels. Two examples of results that could be improved,
from the material presented by Paragios and Deriche in [49], are the animal's legs and
upper body, which have been misclassified as belonging to the wrong class.

3.2 The Wavelet transform

The wavelet transform localises a function in both space and frequency, replacing the Fourier
transform's sinusoidal waves with a family generated by dilations and translations of a window
referred to as a wavelet. The transform can be visualised as a series of filter banks where each
bank is composed of a series of low-pass and high-pass filters. The number of scales to which an image
can be filtered depends on its size - if the total length and width of the image are equal to 2^N,
then N levels are possible. A 2-level discrete wavelet transform of Lena may be viewed in
Figure 3.1.

Figure 3.1 – Wavelet transform of the well-known ‘Lena’ image

There exist many widely used wavelet algorithms, including the Daubechies and
biorthogonal wavelets. These approaches have a powerful advantage in that they provide a better
resolution for an alternating series of data than simpler approaches (such as the Haar
wavelet) currently offer. They also have the notable disadvantage of being more
computationally expensive to calculate than Haar. In some cases the higher resolution offered by
the other wavelet types cannot be justified (depending on the type of data in question), which is
why in some cases the Haar wavelet is chosen instead. The Haar wavelet has several advantages:
(1) it is very fast; (2) it is simple and easy to understand; (3) it has low memory requirements and
is efficient, as it can be calculated in place without the need for a temporary data array; and (4) it
can be accurately reversed without encountering visible artefacts, unlike some other transforms.
Haar is not, however, without its limitations. When generating each set of averages and
coefficients for the next scale, Haar performs an average and a difference on a pair of values. The
approach then shifts by two values and calculates another average and difference on the next pair.
Another issue is that all high-frequency changes should be reflected in the high-frequency
coefficient spectrum, whilst a Haar window is only two elements wide. Essentially, if a large
change occurs from an even-indexed value to an odd-indexed value, this change will not be
visible in the high-frequency coefficients [67][68][69].
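The pairwise average-and-difference behaviour just described can be sketched in a few lines (an unnormalised illustration; practical implementations usually apply the orthonormal 1/√2 scaling instead of plain averages):

```python
def haar_step(signal):
    """One level of the (unnormalised) Haar transform: pairwise
    averages (low-pass) and pairwise differences (high-pass)."""
    assert len(signal) % 2 == 0
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

approx, detail = haar_step([9, 7, 3, 5])
assert approx == [8.0, 4.0]   # smoothed half-length signal
assert detail == [1.0, -1.0]  # high-frequency residual
```

Repeating the step on the averages gives the next coarser scale, which is the window-of-two behaviour whose limitations are noted above.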

3.3 The Inverse Wavelet transform

The inverse wavelet transform allows the original data set to be recovered from a forward
wavelet transform by integration over all scales and locations – this is known as reconstruction.
For the inverse transform, one may make use of the original wavelet function as opposed to its
conjugate which is found in the forward transform. By limiting the integration operation over a
range of scales, instead of all scales, it is possible to perform basic filtering of the original data
set. The inverse wavelet transform reconstructs the original set of wavelet coefficients where the
elements involved are the scaled and translated wavelets. From a mathematical perspective, the
duals of the wavelet transform elements are considered the complex conjugates of said elements
- this however is only true for the continuous transform [70].
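Reconstruction from such a step is exact, as the sketch below illustrates (matching the unnormalised average/difference convention discussed above; an illustration, not the thesis code):

```python
def inverse_haar_step(averages, details):
    """Invert one unnormalised Haar step: each (average, difference)
    pair recovers the original pair of samples exactly."""
    signal = []
    for a, d in zip(averages, details):
        signal.append(a + d)  # first sample of the pair
        signal.append(a - d)  # second sample of the pair
    return signal

assert inverse_haar_step([8.0, 4.0], [1.0, -1.0]) == [9.0, 7.0, 3.0, 5.0]
```

Applying this from the coarsest level back up recovers the original signal, and zeroing selected detail coefficients before inverting performs the basic filtering mentioned above.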

(a) Original image (b) Forward WT (c) Inverse WT

Figure 3.2 - Daubechies reconstruction of the “Nat-2B” image

Wavelets may be considered an extension of Fourier analysis which partition an image into a
series of multi-resolution components which capture fine and coarse resolution features based on
the scale used. Images are partitioned with respect to spatial frequency - which refers to the
frequency with which the image intensity values change. Partitioning is achieved by filtering the
signal with two dyadic orthogonal filters which are referred to as a quadrature mirror filter or
QMF. The two components of the QMF are called a "father" and "mother" wavelet. Whilst the
father wavelet captures an approximate or blurry version of the signal at consecutive resolutions,
the mother wavelet provides the detail at each resolution. Applying the WT to a two-
dimensional signal will return a matrix of coefficients which map the spatial relationships at
multiple scales across the vertical, horizontal and diagonal directions.

3.4 Wavelet Packets

Wavelet packets (WP), another class of the general discrete wavelet transform, offer far more
flexibility for the detection of oscillatory behaviour. The wavelet packet transform provides a
level-by-level decomposition of a signal whereby, rather than dividing only the approximation
("father") spaces to construct detail spaces and wavelet bases, WPs split the details ("mother"
wavelets) as well as the approximations. WPs generate multi-scale texture feature data which
include detailed information about the ROI boundaries. These images can be used to aid image
processing applications as they can simplify the task of describing key artefacts without
requiring computationally expensive routines. This makes them an ideal candidate for use in
one of the key components of a supervised texture segmentation model - the texture descriptors.

Object boundary features (such as those typically found in the foreground) are captured well by
edge-detection methods and are expressed using high-intensity pixels. The boundaries of a gland
are one example of a relevant foreground object's edges. Objects of a lower luminance (such as
background features) maintain a much lower intensity. In order to generate an image with edge
data at the intensities required by the approach defined in this thesis, thresholding is applied to the
multi-scale WPs to allow only pixels of high intensity to be preserved. These may be
referred to as layer 1 images. If the original source image is called layer 2, a new set of
boundary-emphasized images may be produced by individually overlapping each layer 1 on layer
2 by means of an addition operation.
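The threshold-and-overlay step can be sketched as follows (a hypothetical illustration; the images, the threshold value of 128 and the clamping to 255 are all invented for demonstration):

```python
def threshold(image, t):
    """Keep only high-intensity (edge-like) pixels of a feature image (layer 1)."""
    return [[v if v >= t else 0 for v in row] for row in image]

def overlay(base, feature, cap=255):
    """Add a thresholded feature image (layer 1) onto the source image (layer 2)."""
    return [[min(b + f, cap) for b, f in zip(br, fr)] for br, fr in zip(base, feature)]

feature = [[10, 200], [180, 5]]    # hypothetical IWPT feature image
source = [[100, 100], [100, 100]]  # hypothetical source image
enhanced = overlay(source, threshold(feature, 128))
assert enhanced == [[100, 255], [255, 100]]  # edge pixels boosted, rest untouched
```

The resulting boundary-emphasized image is then what a contour-based model sees in place of the raw source.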

3.5 The Forward Wavelet-packet transform

A wavelet packet transform is formed using a number of wavelet transforms. The standard
wavelet transform separates a signal space S_i into an approximation space S_{i+1} and a detail
space D_{i+1} by dividing the orthogonal basis into two new orthogonal bases. The wavelet transform
calculates a low-pass result using a scaling function and a high-pass result through a wavelet
function, where the low-pass result is a smoothed version of the original signal. The low-pass
result becomes the input to the next wavelet step, which generates another low- and high-pass
result, until there is only a single low-pass result left to be calculated. One may view the wavelet
packet transform as a tree - this is one of the most commonly used analogies used to visualize it
and most certainly an intuitive one. Consider the root of this tree as the original image. The very
next level of this tree is the resulting output of one step of the WT. The other subsequent levels
are generated by recursively applying the WT to both the low-pass and high-pass filter results of
the previous step [63].

Figure 3.3 – Decomposition of a Wavelet packet tree.

From an implementation perspective, the Wavelet Packet Tree [Figure 3.3] is composed of two
key stages. The first involves filtering the source image I and sub-sampling it into four new
images which represent the spatial frequency sub-bands. Each of these sub-bands is further
filtered and sub-sampled into another four images - a process which one repeats until reaching a
certain pre-defined level. By maintaining the components in every sub-band at each level, the
Wavelet Packet Tree obtains a complete hierarchy of decompositions in image frequency and is
thus a redundant expansion of the image. If desired, it is possible to improve this result for a
specific problem by selecting a best basis to represent the texture, cutting off branches of the
tree under the control of a cost function applied to a node and its children.
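The recursive splitting of both the low-pass and high-pass outputs can be sketched as below (a 1-D Haar-based illustration for brevity; for a 2-D image each node would instead split into four children, giving the 2^{2j} sub-bands discussed in this chapter):

```python
def haar_pair(signal):
    """Unnormalised Haar step: pairwise averages (low) and differences (high)."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def wavelet_packet_tree(signal, depth):
    """Build the full packet tree by recursively splitting BOTH the
    low-pass and high-pass outputs, as described above."""
    if depth == 0 or len(signal) < 2:
        return {'data': signal}
    low, high = haar_pair(signal)
    return {'data': signal,
            'low': wavelet_packet_tree(low, depth - 1),
            'high': wavelet_packet_tree(high, depth - 1)}

tree = wavelet_packet_tree([9, 7, 3, 5, 1, 1, 2, 4], depth=2)
# A depth-2 tree over 8 samples has 4 leaves of 2 coefficients each
leaves = [tree[a][b]['data'] for a in ('low', 'high') for b in ('low', 'high')]
assert len(leaves) == 4 and all(len(leaf) == 2 for leaf in leaves)
```

Keeping every node makes the expansion redundant, which is precisely what the best-basis pruning of the next section addresses.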

3.6 Cost functions

The decomposition of a signal into wavelet packets allows one to obtain the representation of
the signal in an overly complete collection of sub-bands. This table can contain much
redundancy, and so it is of benefit to have an algorithm which describes the whole data set and is
able to find a basis providing the most desirable representation of the data relative to a
particular cost function.

Cost functions may be chosen to fit particular applications - e.g. in a compression scheme the cost
function may be considered the number of bits required to represent the final result [70]. When a
wavelet packet tree is constructed, all of its leaves are marked with a flag. The best basis
calculation is performed from the leaves of the tree toward the root.

It is of note that in certain cases the best basis may be the same set yielded by a standard
wavelet transform. In other cases, the best basis may not yield a result which differs
from the original data set (suggesting that the original set is already the most minimal
representation available, according to the cost function).

Figure 3.4 – A cost function applied to the Wavelet Packet transform from Figure 3.3

In order to calculate the best basis, the above tree is traversed and each node is marked with its
value of the cost function, C1. When constructing the wavelet packet tree, every leaf is marked with
a flag which is modified when calculating the best basis set. This calculation is performed from
the bottom of the tree (i.e. from the leaves) towards the top (the root). Nodes at the bottom of the
tree (the leaves) return their cost value. As one recurses upwards to the root of the tree, the
cost C1 of each parent node is compared to the total cost of its children, C2, defined as the
sum of all the cost values for the children of the node. If C1 <= C2, the node is marked as part of
the best basis set. If C1 > C2, one replaces the cost value of the node with C2.
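The bottom-up comparison can be sketched as follows (an illustration using a Shannon-entropy-style additive cost in the spirit of Coifman and Wickerhauser; the tree layout and coefficient values are invented, and no coefficient normalisation is performed):

```python
import math

def entropy_cost(values):
    """Additive entropy-style cost: lower for sparse (concentrated) coefficients."""
    total = 0.0
    for v in values:
        p = v * v
        if p > 0:
            total -= p * math.log(p)
    return total

def best_basis(node):
    """Bottom-up best-basis search over a packet tree of {'data', 'low', 'high'}
    dicts: keep a parent if its cost C1 does not exceed the summed child cost C2."""
    if 'low' not in node:  # leaf: return its own representation and cost
        return [node['data']], entropy_cost(node['data'])
    child_bases, child_cost = [], 0.0
    for key in ('low', 'high'):
        basis, cost = best_basis(node[key])
        child_bases.extend(basis)
        child_cost += cost
    own_cost = entropy_cost(node['data'])
    if own_cost <= child_cost:     # C1 <= C2: the parent joins the best basis
        return [node['data']], own_cost
    return child_bases, child_cost  # C1 > C2: replace the cost with C2

# Tiny hand-built tree (hypothetical coefficient values)
leaf = lambda d: {'data': d}
tree = {'data': [4, 0, 0, 0], 'low': leaf([2, 2]), 'high': leaf([2, -2])}
basis, cost = best_basis(tree)
assert basis == [[4, 0, 0, 0]]  # the sparse parent wins under the entropy cost
```

The marked nodes returned by the search form the pruned, non-redundant representation discussed above.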

3.7 Weaknesses of the GARM

The GARM [49] was first introduced as a novel approach for segmenting textured images by
unifying boundary and region-based sources, where boundary information was determined
through the use of a probabilistic edge detector and region information through Gaussian
components of a mixture model. It was shown to be a more effective means of segmenting two-
class image problems involving a background and foreground than the widely-used Active
Contour Model (ACM) [32].

The GARM, although an effective segmentation algorithm, does however suffer from partial
misclassifications when applied to images containing what may be referred to as "complex"
textures. A complex texture may be found in texture samples containing detailed patterns such
as gradients, grids, dots and deformed lines of variable intensity. Exemplary and widely
published cases of this phenomenon may be viewed in [63], whereby segmentations of (1) the
cheetah and (2) the zebra do not follow the correct ROI boundaries. Whilst this
observation does not detract from the GARM's ability to provide useful segmentations, it does
call into question the level of accuracy it is capable of supplying to applications.

In support of further evaluation of the GARM's limitations, additional segmentation results have
been generated to demonstrate particular aspects of segmentation accuracy which could be
improved upon.

(a) – A zebra in a field (b) – A wolf (c) – Microscopic cells

(d) – A cheetah in grass (e) – A Brodatz image

Figure 3.5 – Examples of texture segmentations output by our own implementation of the
Geodesic Active Region model

The images in Figures 3.5 (a)-(e) were sampled from a random distribution of real-world and
medical images with easily distinguishable foreground and background classes. The
segmentation results were obtained using the GARM supervised with 3 texture samples of each
class. As ascertainable from the above, these segmentations could be more accurate.

Figures 3.5(a), (b) and (d), conducted on real-world textured images, demonstrate that the GARM
is capable of approximately segmenting the background and foreground in these samples;
however, this distinction of separate classes could be greatly improved upon. For example, in
Figure 3.5(a), the contour stops a distance from the object's true boundary. In Figure 3.5(b) a
similar scenario is observed, and in Figure 3.5(d), a slightly more texturally complex problem
due to the skin spots, the algorithm not only fails to form a contour around the edge of the main
object but also misclassifies part of the animal's head as belonging to the background.

Figure 3.5(c) is an enlarged group of cells which have also been segmented by the GARM. As
can be noted from the figure, this result also suffers from quite a few misclassifications:
firstly, the curve does not segment the background areas inside clusters of cells (see the large
square nearest the right). Secondly, as can be observed from the area lower down and also the area
to the left of the image, the contour does not lie as close to the object's edge as it would were
the image accurately segmented. The bottom-left corner of the image also suffers from an
inaccurate segmentation, as even open-ended areas containing cells have not been well classified.

Figure 3.5(e) is a synthetically generated Brodatz image containing five distinct textures. The
result shown was obtained by allowing the GARM to proceed with a segmentation through 120
iterations. Towards the base of the image it can be observed that the algorithm fails to attract the
contour around the circular ROI in the foreground, instead producing misclassification contours
within the object's boundaries. Such contours are also prevalent near the top of the image.
Overall, the quality of this segmentation could not be considered poor; however, as with the
previous figures, there is room for improvement.

3.8 Improving the GARM

In recent years, areas of computer vision such as medical imaging, where strong edge information
may not always be prevalent across the boundaries of an object to be segmented, have seen the
overall performance of purely contour-based methodologies prove unreliable. This has led to a
class of region-based segmentation models becoming increasingly important, with additional
metrics such as image statistics being taken into account to provide more accurate results. Work
of note includes [75], [63], [76], [77], [78].

The region-based segmentation approach being examined by this thesis is the GARM, which
combines region- and boundary-based segmentation information to generate results of a
reasonable quality across real-world, textured and medical images. Although efforts have
previously been made to improve on the level of accuracy offered by the GARM [78][79],
there has not been a great deal of emphasis placed on revisiting the problem of improving the
capture accuracy of its texture descriptors. This is a vital precursor to segmentation, and any
enhancement of the quality of the underlying data provided to a segmentation approach could have
large implications for how much more clearly an object's boundaries and separate classes
are represented.

Improving this stage of the GARM is of great importance, as many if not all modern approaches
instead opt to tweak aspects of the region- and boundary-based segmentation paradigm. As
segmentation seeks to separate one part (or class) of an image from another, such optimisations
would focus on enhancing the visibility of ROI boundaries, thus easing the classification
problem of the GARM and possibly of other segmentation approaches as well. As mentioned earlier
in this chapter, an approach whereby the object boundaries of an input image could be enhanced,
such that this optimisation could yield improved segmentation results using a reliable, well-tested
model (such as the GARM), would offer a non-complex path to enhancing the performance of
many supervised segmentation approaches. Wavelet packets have been suggested as a means to
achieving this goal, where the challenge lies in discovering how multi-scale wavelet packet
features can be integrated into a model like the GARM to effectively (and consistently) provide
improved segmentation results.

3.9 A Wavelet-packet texture descriptor

The first step in integrating a family of wavelet packet features into a segmentation model (such
as the GARM) is to consider the texture descriptors as a paradigm independent of the supervised
segmentation algorithm. This allows ROI boundary optimization of the image. In a traditionally
defined implementation of the GARM, anisotropic and isotropic filters are integrated as part of a
Gabor filter bank in order to accurately capture texture features from a set of pre-defined texture
samples. These features provide the supervised learning data necessary to train the algorithm
such that an image segmentation with n iterations will provide a segmented image result of
reasonable accuracy.

Although algorithms such as the GARM do perform adequately with certain groups of synthetic
and real-world imaging problems, in many cases they are unable to achieve high rates of
accuracy in images of particularly low contrast difference, such as clinical biopsies in the field of
medicine - as shown in Chapter 5. The core problem being addressed by any enhancement
technique is thus to increase the segmentation accuracy of a two-class image problem in cases where
suboptimal results are obtained using what may be considered sufficient training data of
acceptable quality. In reflection of the methods previously discussed regarding the GARM, the
primary equation of interest to this research stems from the Gabor spectrum filter, used for the
generation of histograms in the Paragios and Deriche algorithm. This equation may be formally
defined as follows:

Computation of the Power Spectrum

\[
S = S_{v}\big(tx_{i} * R_{n},\; tx_{i} * I_{n}\big) \tag{3.4}
\]

where tx_i is the current texture sample being processed, R_n and I_n are the current real and
imaginary Gabor kernels, and S_v is a function calculating the sum of squares for both sets of
terms. This thesis aims to examine the benefits of harnessing a multi-scale approach for
boundary enhancement. A multi-scale paradigm (such as the wavelet packet transform) offers an
efficient characterisation of textural regions in terms of spatial frequencies, making it an ideal
candidate for the extraction of additional boundary information.

3.10 A Pseudo-code description of the WB-GARM texture
descriptor enhancement technique

The primary steps involved in the texture extraction routine being examined are to:

1. CREATE an array of multi-scale wavelet packet sub-bands at a scale k

2. SELECT the sub-bands containing the most boundary information

3. ISOLATE the coefficients for each sub-band selected individually

4. CALCULATE the inverse wavelet packet transform of each band resulting in a feature
image I i
5. FILTER I i to isolate the edge data from the rest of the image

6. SUM the pixels generated by each texture t_i with each F(I_n) to generate a final set of
boundary-enhanced texture samples

7. INPUT these samples to a contour-based segmentation algorithm to produce an improved
texture segmentation.
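The steps above can be sketched as a skeleton (the helper names fwpt, iwpt and zero_all_but are hypothetical stand-ins for a wavelet-packet library, and the demonstration values are invented; this is an outline of the pipeline, not the thesis code):

```python
def enhance_texture_samples(image, textures, scale, keep_bands, edge_threshold,
                            fwpt, iwpt, zero_all_but):
    """Skeleton of steps 1-6: build boundary-enhanced texture samples."""
    bands = fwpt(image, scale)                      # 1. multi-scale WP sub-bands
    features = []
    for band_index in keep_bands:                   # 2. bands with most boundary info
        coeffs = zero_all_but(bands, band_index)    # 3. isolate one band's coefficients
        feature = iwpt(coeffs)                      # 4. inverse transform -> feature image
        features.append([[v if v >= edge_threshold else 0 for v in row]
                         for row in feature])       # 5. keep only high-intensity edge data
    enhanced = []
    for t in textures:                              # 6. sum each sample with the features
        merged = [row[:] for row in t]
        for f in features:
            merged = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(merged, f)]
        enhanced.append(merged)
    return enhanced                                 # 7. input to the segmentation model

# Trivial stand-ins so the skeleton can be exercised end-to-end
fwpt = lambda img, scale: [img]          # pretend the image is its own sub-band
zero_all_but = lambda bands, i: bands[i]
iwpt = lambda coeffs: coeffs
out = enhance_texture_samples([[0, 300], [300, 0]], [[[1, 1], [1, 1]]],
                              scale=1, keep_bands=[0], edge_threshold=128,
                              fwpt=fwpt, iwpt=iwpt, zero_all_but=zero_all_but)
assert out == [[[1, 301], [301, 1]]]
```

In the full method the stand-ins would be replaced by the real forward and inverse wavelet packet transforms described in Sections 3.5 and 3.3.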

3.11 Generating Multi-Scale Wavelet Packet Texture Features

The process of generating a Forward Wavelet Packet Transform (FWPT) at scale j results in the
creation of 2^{2j} sub-band images containing a variety of texture-based feature information. These
sub-bands are visually presented in grid form and contain information from a pool of coefficients
for each sub-band.

Figure 3.6 – Forward Wavelet Packet decomposition. As displayed above, the Forward Wavelet
packet transform may be viewed in the form of a tree. At the root of this tree is the
original data set. The next level of the tree is the result of one step of the wavelet
transform. All subsequent levels in the tree are created by recursively applying the wavelet
transform step to both the low- and high-pass filter results of the previous step.

Figure 3.7 – IWPT Recomposition. In this figure, the Inverse Wavelet packet transform works
up the levels of the tree, performing convolutions on each of the data arrays and reconstructing
the higher resolution data on each level. Each level of the tree is traversed in a similar way to the
FWPT in Figure 3.6. Destination arrays appear on the next higher level and are selected by
dividing the index by two. At the highest level, the destination array is the output data array.
When each convolution operation completes, the length of the data is doubled to accommodate
the interpolation performed during the convolution.
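The recursive forward and inverse tree traversals of Figures 3.6 and 3.7 can be sketched as follows. Haar analysis/synthesis filters and even-sized inputs are assumptions made for brevity; the helper names are illustrative.

```python
import numpy as np

def haar_step(x):
    # One 2-D Haar analysis step -> four quarter-size children (LL, LH, HL, HH)
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return [(a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2]

def inverse_step(children):
    # Merge four siblings back into their parent array (exact inverse)
    ll, lh, hl, hh = children
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fwpt(x, scale):
    # Figure 3.6: recursively decompose EVERY band, low and high pass alike
    if scale == 0:
        return [x]
    return [band for child in haar_step(x) for band in fwpt(child, scale - 1)]

def iwpt(bands, scale):
    # Figure 3.7: walk back up the tree, merging four siblings per level
    if scale == 0:
        return bands[0]
    quarter = len(bands) // 4
    return inverse_step([iwpt(bands[i * quarter:(i + 1) * quarter], scale - 1)
                         for i in range(4)])
```

Decomposing every child (rather than only the low-pass band, as the plain wavelet transform does) is what produces the full 2^(2j) packet grid at scale j.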

Figure 3.8 – FWPT of the Lena image at scale 3

The Inverse Wavelet Packet transform of the well-known Lena image reconstructed from the
above packets should result in an image similar to the original, as Figure 3.9 (b) does.

(a) Original image (b) IWPT Reconstruction

Figure 3.9 – Using the 16 largest wavelet packet coefficients, which contain 98.6% of the signal
energy, we are able to create a perfect reconstruction of the image in Figure 3.9 (a). The
resulting reconstruction can be viewed in Figure 3.9 (b).

In order to generate the IWPT feature data, it is necessary to retain only the coefficients of the
selected sub-bands. To focus on the coefficients of a single selected sub-band, the

coefficients of all other sub-bands are set to zero, resulting in an inverse feature image once the
IWPT is applied.

(a) Isolating Wavelet packet sub-band 2 (b) IWPT of sub-band 2

Figure 3.10 – Generating IWPT Feature data. For the purposes of demonstration, the second
sub-band in Figure 3.10 (a) at scale 3 is selected, with the coefficients of all remaining
sub-bands set to zero. Figure 3.10 (b) is the feature image generated from this data after the
IWPT has been applied.

Once a sub-band has been selected and all the others successfully discarded, the Inverse Wavelet
Packet transform is applied to the wavelet packet coefficients in the feature frame shown in Figure
3.10 (b). The first observation that may be made on examining this result is that the IWPT feature
image is highly pixelated. Edge-data of high pixelation is a problem which may be addressed by
means of convolution filters such as simple smoothing or a Gaussian of window size 3x3 to 6x6.

Although an available option, this particular approach to solving pixelation problems has been
previously attempted by [80] and later criticised for its use of smoothing operations on texture
descriptors [81]. One logical argument for avoiding the usage of more than 2 levels of WPT is
that as a result of their unacceptably low visual-resolution, the deeper levels are unable to assist
in boundary edge-enhancement - instead, noticeably reducing the clarity of boundaries when
integrated as part of a segmentation algorithm. At the first two scales, inverse WP feature images
are of a much higher resolution and are therefore more capable of aiding in the enhancement of
object boundaries due to their edges being more accurately defined.

Figure 3.11 - IWPT of sub-band 2 at Scale 2 with greater detail.

In Figure 3.12 we may see a summary of the steps required to create an inverse wavelet feature
image from a chosen sub-band. Collections of feature images may be generated from a selection
of specific sub-bands, an entire scale, or multiple scales (as is the case with the approach outlined
in this thesis).

Figure 3.12 – Creating Feature Images

3.12 Preparing WPF feature images for usage

As is the case with many contour-based supervised segmentation algorithms, the GARM utilises
two sets of texture samples as its primary source of training data. These texture samples (T_s) are
extracted from the original source and typically represent specific areas (or patches) of the
image.

In contrast, WP feature images represent the entire area of a picture and must therefore have
equivalent patches extracted if they are to be used in subsequent processes in place of T_s. To
facilitate this step, the implementation of wavelet packets used by this thesis allows GARM
texture samples to be defined as sets of coordinates. This renders the task of extracting wavelet-
based texture samples from the IWP frames a trivial procedure.
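A minimal sketch of this coordinate-based patch extraction, assuming samples are given as (row, col, height, width) tuples; the exact coordinate convention used by the thesis implementation is not specified here.

```python
import numpy as np

def extract_patches(images, coords):
    """Cut equivalent texture samples out of each full-size WP feature image.

    `coords` holds (row, col, height, width) tuples - assumed to be the same
    coordinates that defined the GARM texture samples on the original image.
    """
    return [[img[r:r + h, c:c + w] for (r, c, h, w) in coords]
            for img in images]
```

Because the feature images are the same size as the source image, the one set of coordinates serves both the "standard" samples and the wavelet-based ones.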

(a) (b)

Figure 3.13 – In this figure we can see a synthetic Brodatz image (3.13 (a)) and a
Wavelet packet feature image of the same (3.13 (b)). Brodatz images have become a de facto
standard in texture processing literature and provide a good set of homogeneous textures which
can also be used for testing the effectiveness of segmentation algorithms. In the above WPF
image we can see that a number of strong edge artefacts have been captured by the Wavelet
packet transform. Integrating this additional edge information into a filter bank (such as that
found in the GARM) can have a positive impact on segmentation quality as the algorithm has
more information about the image's topology.

(a) (b) (c)

Figure 3.14 – Wavelet packet features of the well-known “Brodatz” image. In Figure 3.14 (a), we
can see the Wavelet packet transform of the Brodatz image presented in Figure 3.13. The
wavelet packets have been rescaled to offer a clearer image in print medium. Figure 3.14 (b)
shows a group of selected sub-bands taken from Figure 3.14 (a) – indexed across, these are
sub-bands 2, 3, 5, 7 and 9. These sub-bands are selected for demonstration as they contain strong,
interesting edge features. It may be observed in these samples taken from the sub-bands in Figure
3.14 (b) that Wavelet packet feature images can contain three main types of variation in pixel
intensities. These are (i) areas of mid-intensity, (ii) areas of low-intensity and (iii) edges of
high-intensity. Despite many of the edges in these images being visible, a great deal can still be
done to further emphasize these artefacts through simple adjustment in contrast.

3.13 Contrast adjustment of WPF images

Contrast is a measure of the sensitivity of the human visual system, manifesting as the
difference in brightness between the light and dark areas of an image. A large problem with
certain histopathological images is that they may contain multiple regions with low levels of
contrast difference, making it more difficult to distinguish whether an area should
belong to a foreground class or a background class. This fact can affect the accuracy of

segmentation algorithms as well as interfering with thresholding techniques which rely on small
differences between regions being present. For example, if a supervised segmentation approach
is supplied with two texture samples - one from the foreground A and another from the
background B - each of which has low contrast and, as a result, similar pixel intensities,
the algorithm may be unable to decide whether a window it is currently examining should be
classified as belonging to A or B. There is, however, a solution to this problem. As we
are dealing with images within a specific domain, we can analyze the variation in pixel
intensities of regions with similar levels of low contrast, and thus adjust the contrast levels of
pixels which fall into particular ranges in order to create a more optimal image for the
segmentation algorithm to process. This adjustment is applied to the Wavelet packet feature
images using the Michelson contrast [84], a measure commonly used in image processing
applications when dealing with images containing an equivalent distribution of low- and
high-intensity pixels. In combination with appropriate rescaling, the contrast adjustment offers
a fast, computationally cheap method of further emphasising edges and may be calculated using
the following equation:

I_i^C = ((L_max − L_min) / (L_max + L_min)) I_i        (3.5)

Where I_i^C is the contrast-adjusted texture image, L_max and L_min are the highest and lowest
luminance values, and I_i is the current wavelet packet feature image being processed. In

order to dim the background and increase the edge-intensities of object outlines, a standard
brightness filter is also applied in this step as a precursor to the rescaling stage.

Brightness is represented as an extension of the Michelson algorithm as follows:

I_i^C = ((L_max − L_min) / (L_max + L_min)) (I_i[(p_c + br)])        (3.6)

where p_c denotes the colour components of each pixel and br the increase in channel
brightness. This is the equation implemented as part of the solution presented with this thesis.
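Equations (3.5) and (3.6) might be implemented along these lines; treating L_max and L_min as the extrema of the input image, and applying the brightness offset br before scaling, are both assumptions rather than details fixed by the thesis.

```python
import numpy as np

def michelson_adjust(img, brightness=0.0):
    """Eqs (3.5)/(3.6): scale pixel values by the Michelson contrast of the
    image, after an optional per-channel brightness offset `br` (assumed to
    be added before the scaling)."""
    img = np.asarray(img, dtype=float)
    lmax, lmin = float(img.max()), float(img.min())
    if lmax + lmin == 0:
        return np.zeros_like(img)                 # avoid division by zero
    c = (lmax - lmin) / (lmax + lmin)             # Michelson contrast
    return c * (img + brightness)                 # the (p_c + br) term of (3.6)
```

With `brightness=0.0` this reduces to Eq (3.5); low-contrast images (small L_max − L_min relative to L_max + L_min) are dimmed overall, so a subsequent rescaling step can re-stretch the edges of interest.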

3.14 Rescaling pixel values

(a) Standard WPF (b) Contrast adjusted WPF

Figure 3.15 – An analysis of pixel value ranges

The above figures display the average range of pixel intensities for standard Gabor filter texture
descriptors and the contrast-adjusted Wavelet-Gabor packet texture descriptors. Linear rescaling
allows pixel values to be mapped such that they fall within a certain desired range. The benefit
of this is that the intensities of prominent pixels can be moved from one range to another,
allowing them to fit a particular computer vision application (in the case of this thesis, clinical
histopathology images) without significant loss of feature representation.

Figure 3.15 (a) and (b) show (i) the range of pixel intensity values that may be observed when
generating WPF feature images and (ii) the range of values obtained after contrast adjustment has
been performed. Analysis of the average minimum and maximum edge-intensity values that may
be observed in such images finds that they lie between 0 and 110 respectively.

Pixel rescaling must be intense enough to lower the overall intensity values as close to zero as
possible in order to generate clearer boundary outlines from existing edge-data. Values must also
be high enough to ensure edges are not merely converted to solid black lines or curves. This
requirement is enforced to minimise the loss of texture data surrounding object boundaries and is
kept to a bare minimum - attempting to segment objects containing such solid lines on their
interior can easily result in misclassification, as contour-based approaches may consider them as
being independent of their parent objects.

Pixel rescaling is not performed uniformly on the vector of pixels representing a WPF image.
Instead, rescaling is performed only on those pixels whose values fall outside of a range, R –
which is close to zero for the reasons stated in the previous paragraph. Rescaling is performed
in increment steps defined by a value M, within a range N + M, where N is any pixel value
greater than or equal to zero. If a pixel p falls within this range, its intensity is lowered by q.

Where p is a selected pixel, M is a range increment, s is a starting point, q is the intensity
decrement, R_min is the start of a range and R_max is the end of a range:

if ((p >= s), (p <= (s + M)), (s > R_min), (s < R_max))

→ p = p − q

Figure 3.16 – Equation for threshold-based pixel rescaling

Where p = 70, M = 20, R_min = 30, R_max = 150 and s = (40 + M):

if ((p >= 70), (p <= 90), (70 > 30), (p < 150))

→ p = 70 − 20 → p = 50

Figure 3.17 – Example of applied pixel rescaling
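The rule of Figure 3.16 can be sketched as below. The symbol `s` stands in for the starting point, and the test values are chosen for illustration rather than copied from Figure 3.17.

```python
import numpy as np

def threshold_rescale(img, s, M, q, r_min, r_max):
    """Figure 3.16 (sketch): lower by q every pixel falling in [s, s + M],
    provided the starting point s itself lies inside (r_min, r_max)."""
    out = np.asarray(img, dtype=float).copy()
    if r_min < s < r_max:
        mask = (out >= s) & (out <= s + M)   # pixels inside the increment step
        out[mask] -= q                       # p = p - q
    return out
```

In practice this would be applied repeatedly, stepping `s` through the allowed range in increments of M so that all near-boundary intensities are pulled towards zero without collapsing them to solid black.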

The application of this approach successfully results in the generation of a set of WP feature
images containing edge and boundary object information of low-intensity and high-contrast
which retain much of their texture detail around each edge.

Sub-bands selected from the set of Wavelet packet features for use in the new texture descriptors
are chosen based on the "usefulness" of the information they contain - certain sub-bands, such as
those found at scale 2 {6, 8, 10 and 11}, contain very strong edge and textural features which can

greatly assist in improving the Gabor filter bank currently used in the Active Region Model.
Other sub-bands at this scale are discarded as they do not contain sufficient edge information to
be of great advantage. Sub-bands for the texture descriptors are also chosen based on their
resolution, with the optimal sub-bands for this particular application being found at scale 2 - scale
1 contains too few sub-bands to obtain a sufficiently large breadth of edge information and scales
3 and above have a resolution which is too low to be helpful. By using between 4 and 5 sub-
bands from scale 2 in conjunction with the GARM's filter banks, a greater set of texture
information can be supplied to help improve the descriptive powers of the texture descriptors.

(a) (b) (c)

Figure 3.18 – The effect of contrast adjustment on WPF samples

(Above) Figure 3.18 (a) – a WPF texture sample. Figure 3.18 (b) – the sample after contrast
adjustment has been applied. Figure 3.18 (c) – the sample after contrast adjustment and rescaling
of pixel values. These images, which are the final wavelet packet feature images (WPF), may
now be summed with standard texture samples to create texture descriptors with emphasised
boundary information.

3.15 Pixel Addition

There exist two sources of texture data which may be harnessed for use with supervised texture
segmentation through the Geodesic Active Region model. These are the original set of Gabor
texture feature images which are generated as part of the GARM's filter bank and the new set of
Wavelet packet feature images generated using wavelet packets. The Gabor images capture basic
information about the image's textures and the WPF images capture a wide range of useful edge
information taken from different sub-bands at a chosen scale. By combining these two image
sets through a process of arithmetic pixel addition, it is

possible to generate a third set of texture data which contains both good texture information and
strong edge data. This representation (as will be demonstrated) can have a positive effect on
segmentation quality.

Addition operation: I(x, y) = min[ I_image1(x, y) + I_image2(x, y); I_max ]        (3.7)

As the range of pixel values across any colour channel is limited by I_min = 0 and I_max = 255,
arithmetic pixel addition holds the possibility of value overflow, which can result in clipping of
the pixel intensity. Clipping is a side-effect whereby, if the new pixel's intensity is higher than
I_max, its value will be set to I_max. The effects of this may, however, be avoided or reduced
through pixel rescaling prior to the arithmetic addition operation. In order to rescale correctly,
an analysis of both input images may be performed to estimate the maximum and minimum
summed intensity values (I1 + I2)_max and (I1 + I2)_min.
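Equation (3.7) and the rescale-before-adding safeguard might look like this. The uniform scale factor derived from the estimated summed maximum is one plausible choice, not necessarily the thesis's exact scheme.

```python
import numpy as np

def add_images(img1, img2, i_max=255.0):
    """Eq (3.7): per-pixel addition, clipped at I_max."""
    return np.minimum(np.asarray(img1, float) + np.asarray(img2, float), i_max)

def add_without_clipping(img1, img2, i_max=255.0):
    """Rescale both inputs first so the summed range cannot exceed I_max,
    avoiding the clipping side-effect described above."""
    img1 = np.asarray(img1, float)
    img2 = np.asarray(img2, float)
    peak = float((img1 + img2).max())            # estimated (I1 + I2)_max
    k = 1.0 if peak <= i_max else i_max / peak   # uniform scale factor
    return k * img1 + k * img2
```

The second variant trades a small global dimming for preservation of relative intensity differences, which matters when the summed descriptor is later thresholded.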

(a) – Original colon histopathology image (b) – WPF image

(c) – Original + Rescaled image (d) – Emphasised boundaries

(e) GARM using texture Gabor texture samples (f) GARM using WPF texture samples

Figure 3.19- Visual walkthrough of proposed algorithm with a histopathological colon biopsy
image.

The first step in segmenting an image using the new Wavelet packet texture feature descriptors is
to input a set of texture samples from the image in Figure 3.19 (a) from both the foreground and
background. These are taken in as a set of coordinates from the original image and are referred to
as "standard" texture samples for the algorithm. A set of Wavelet packet feature (WPF) images
(3.19 (b)) is then generated for the image 3.19 (a). WPF based texture samples are then extracted

from these images using the coordinates previously provided for the "standard" texture samples.
A final set of texture samples are then generated by combining both Wavelet and "standard"
texture samples through a pixel addition operation - this allows the creation of a texture
descriptor which captures both strong edge features and strong texture information. A
demonstration of how this appears as a whole image may be seen in 3.19 (c). As may be
observed in Figure 3.19 (d), visible improvements have been made to the thickness and
continuity of object boundaries in the original image due to the addition of the Wavelet packet
features. In Figure 3.19 (e), an un-enhanced segmentation, it may be observed that the GARM
segments the foreground regions of interest some distance inside the glands, misclassifying the
position of the object's boundaries in the process – something that is considerably less prevalent
in 3.19 (f). The result of pixel addition is that the GARM segmentation algorithm is
now able to better gauge where the glandular object's boundaries lie and is thus more capable of
attracting an accurate contour around the foreground regions of interest in the original image.

3.16 Adjustments for improved results in Medical Applications

Although the newly proposed method generates improved results across many image domains, one
area of particular interest is medical image processing. In certain medical applications, such as
the application of texture segmentation to colon biopsy samples, a poor distribution of contrast
and grey levels can reduce the overall visible disparity between cells and can pose a serious
problem to the visual separation of glandular regions and the objects that lie near their boundaries.
Contrast adjustment of an input source in the pre-processing stages can allow artefacts with
higher luminance, typically objects in the foreground class, to become further distinguishable
from areas of darker pixel intensity in ways that may not be apparent at first glance.

Other types of images that suffer a similar contrast problem to colon biopsy samples are captured
bipolar cells in the retina, a problem addressed in [88], and Computed Tomography (CT) images
[89]. Fahey et al. [88] achieved some interesting results using contrast flashes of positive and
negative polarity applied to the central object of interest. Their results showed a 15-20%
increase in contrast for cells in front of a darker background. Computed Tomography is an area
in which a considerable amount of work on contrast adjustment has been done; previously
researched methods include adaptive histogram equalization (AHE), which maps pixels of a
source image to the resulting image such that the histogram of the resulting image shows a
uniform distribution [90]. A drawback of this method is noise over-enhancement, which has been
addressed by more recent work such as interpolated AHE [91].
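For reference, global histogram equalization (of which AHE is the windowed, locally adaptive variant) can be sketched in a few lines; this is a generic textbook version, not the method of [90] or [91].

```python
import numpy as np

def equalize(img):
    """Global histogram equalization: remap grey levels through the normalised
    cumulative histogram so the output histogram is roughly uniform. AHE, as
    described above, applies the same idea within local windows instead."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[int(img.min())]        # cdf at the darkest occupied level
    span = cdf[-1] - cdf_min
    if span == 0:                        # flat image: nothing to equalize
        return img.copy()
    lut = np.clip((cdf - cdf_min) / span, 0.0, 1.0)
    return (lut[img] * 255.0).astype(np.uint8)
```

A low-contrast image occupying a narrow band of grey levels is stretched to cover the full 0-255 range, which is exactly the disparity problem described above for colon biopsy samples.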

3.17 Summary

As presented in this chapter, enhancement of object boundaries through the use of wavelet
packet features (WPF) using pixel addition is an effective procedure for increasing the
probability of a contour-based segmentation model forming contours with a higher level of
accuracy around objects of interest. The process used harnesses the power of a set of multi-scale
wavelet packet decompositions, combining them with a set of pre-selected texture samples in a
low-cost computational step that may provide supervised segmentations far closer to a ground-
truth than a conventional segmentation model without such enhancement applied.

This improvement in accuracy is achieved by increasing the clarity of the boundaries belonging
to prominent objects in the image by utilizing additional data found in the WPF images to
describe their texture and emphasize where their boundaries end; in turn, heightening the Gabor
kernels' ability to correctly capture texture patterns and thus represent the boundaries between the
foreground and background of the image being segmented. Due to the split between these
regions being much more clearly defined, the task of wrapping a contour more closely to the
object's true boundaries becomes more likely.

Although this thesis has focused on the application of this technique to the Geodesic Active
Region Model (GARM), a similar process may be applied to other contour-based segmentation
algorithms, either as a pre-processing stage, whereby the boundaries of a source image are
enhanced prior to segmentation, or as a dynamic process which extracts enhanced patches based
on specific texture samples being supplied to the segmentation model.

Chapter 4

Evaluation

Introduction

In order to evaluate the improved segmentation performance of the GARM with the new wavelet
packet texture descriptors presented in this thesis, experiments were conducted on a selection of
both real-world and medical images. Assessment of segmentation quality is a relatively complex
task for which there are currently no standard solutions available; however, a point-by-point
reference comparison along the generated curves was found to be an adequate performance
measure. This involves the use of a ground truth image with specific points of curvature through
which a segmented curve must pass in order to be counted as successfully segmented.
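The point-by-point comparison described above might be scored as follows; the tolerance parameter and the nearest-point criterion are assumptions, since the thesis does not fix them here.

```python
import numpy as np

def curve_point_score(gt_points, contour_points, tol=2.0):
    """Count how many ground-truth points of curvature the segmented contour
    passes within `tol` pixels of. Returns (hits, total ground-truth points)."""
    gt = np.asarray(gt_points, dtype=float)
    ct = np.asarray(contour_points, dtype=float)
    # distance from every ground-truth point to its nearest contour point
    d = np.sqrt(((gt[:, None, :] - ct[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return int((d <= tol).sum()), len(gt)
```

A perfect segmentation scores (N, N) against a ground truth with N points of curvature, so per-image results can be compared directly, as in Table 4.1 below.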

For the purposes of comparison, segmentation results from the WB-GARM will be compared
against those of the GARM for real world images and the GARM as well as the Active Contour
Model for histology images, as both have been extensively used in the field of image processing
with differing levels of success. As discussed in [49], one of the problems the original GARM
encountered with real-world images was an inability to consistently segment objects in the
foreground accurately. This trait also applies to particular types of medical images, as will be
explored in greater detail shortly. First, the results of Wavelet packet texture descriptors on a set
of real-world images will be presented.

4.1 Results on a real-world data set

4.1.1 Data
Our data-set consists of 30 real-world images selected by visual inspection based on the quantity
of primary textures observed in them. Of these, a selection of the 3 best results is presented based
on their ability to demonstrate the strengths of the newly proposed algorithm.

Images presented were of resolution 256x256 with a single channel of colour -
greyscale. In particular, images featuring wildlife or buildings with distinct texture
characteristics such as dots, spots, skin or stone patterns were found to be an invaluable source of
interesting data for segmentation tests. In certain cases these featured obvious visual differences
along the boundary between the foreground and background whilst others presented challenges to
the perceptive system.

Images such as these are an example of why accurate segmentation algorithms are a necessity –
the ability for a computational process to segment objects in such images can open up the door to
several real-world applications including improvements to automatic area detection in point-and-
shoot photography. Three images of differing complexity were chosen from the above set to
compare the segmentation quality of the GARM and the WB-GARM model.

Any particular image may be segmented by first selecting a set of foreground and background
texture samples and storing the positions of these samples as coordinates relative to the source
image. The algorithm takes these samples and then creates a separate set of Wavelet packet
texture descriptors which, when combined with the standard samples, assist the Gabor kernels to
better segment the foreground based on improved knowledge of object boundaries.

(a) Elephant (b) Cheetah 1 (c) Cheetah 2

Figure 4.1 – GARM segmentation results

(a) Elephant (b) Cheetah 1 (c) Cheetah 2

Figure 4.2 – WB-GARM segmentation results

(a) Elephant (b) Cheetah 1 (c) Cheetah 2

Figure 4.3 – Ground truth images featuring points of curvature

[Bar chart: counts of correctly classified points of curvature for the GARM, WB-GARM and
ground truth on images (a), (b) and (c); vertical axis from 0 to 180.]

Table 4.1 – Comparison of the segmentation quality between the GARM, WB-GARM and
ground truths of the images in Figures 4.1-4.3 based on the points of curvature in each ground
truth image.

4.1.2 Analysis of real-world results

Figures 4.2 (a)-(c) demonstrate the ability of the Wavelet Packet texture descriptors to improve
the accuracy of a final curve propagation. As the Gabor kernels have been supplied with Wavelet
Packet texture descriptors containing enhanced boundary information the contour wraps around
the true object boundaries significantly more tightly than the un-enhanced standard GARM in
Figures 4.1 (a)-(c). Each of the images in this test set is processed for a number of iterations n -
the steps required for the segmenting contour to reach convergence around the regions of
interest in the image. Figure 4.2(a) displays an elephant whose final segmentation after 150 such
iterations has improved from that of Figure 4.1(a) – the curve has been attracted to the
actual object boundaries. As the foreground texture samples used for both enhanced and
unenhanced tests include the shaded area behind the elephant’s ear, a more complete
segmentation of the elephant as a whole is possible.

Figure 4.2 (b) (also after 150 iterations) presents improvements to how close the GARM has
been able to successfully segment the cheetah from its background. In comparison to Figure 4.1
(b) where the segmentation stops short of the object’s correct boundaries at almost all key points,
this is another example of where the WB-GARM wavelet packet texture descriptors offer a better
quality of result for real world images. Figure 4.2 (c) (after 200 iterations) is a difficult image to
conventionally segment due to the low differences in pixel intensity between the cheetah and the
grass behind it. The GARM segments most of the animal but does not exclude strands of grass
which overlap the cheetah’s head, nor does it correctly segment the cheetah’s tail. This is a third
example of wavelet-packets being an excellent source of additional image information which
may be harnessed to assist in achieving more accurate textured image segmentations using
existing models.

In many of the cases tested, the WB-GARM resulted in an improved segmentation with more
correctly classified ground-truth curve points than the GARM. Although the GARM correctly
classifies some of the regions, it fails to form contours on the exact object boundaries and
misclassifies the majority of the elephant's trunk and much of its back as belonging to the
background. The WB-GARM prevented certain aspects of these misclassifications, as has been
demonstrated above.

4.2 Glandular segmentation of histology images

Introduction
Validation of the improvements offered by Wavelet packet texture descriptors to the supervised
segmentation of medical images will be applied to the specific problem of glandular
segmentation. Glandular segmentation is a challenge which spans many areas of medical
histopathology, including colonoscopy and the study of prostate images. In several cases, the
isolation of a particular area of a slide for further study, such as for the detection of colon
cancer, is of pivotal importance in making an early diagnosis of the disease. As with many areas
of medicine, in histopathology there is a significant amount of inter- and intra-observer
variation between clinicians' judgements of specimens, which can lead to inaccurate manual
segmentations of ROIs. The absence of a single, accurate observation among pathologists is the
motivation for assistance using computer analysis.

Many studies have looked at the problem of classifying histopathological images used in the
diagnosis of colorectal cancer [92][8] whilst other classification efforts have shown interest in
the segmentation of glands from biopsy slides [93][94]. The need for a computational
alternative to manual glandular segmentation stems from the tedious steps required for current
colonoscopy analysis, which can often include electronic cleansing techniques combining bowel
preparation and oral contrast agents before finally using image segmentation to extract the lumen
from CT images of the colon [95][8]. Glandular segmentation can be thought of as a boundary
detection problem. Possibly the most essential element of glandular segmentation in conjunction
with region-analysis is the accurate segmentation of lumen (the interior part of the cell) from the
darker nuclei on their boundaries. Unfortunately, computational estimation of the lumen
boundaries can sometimes be a difficult task due to the low contrast difference between
attenuation values of the lumen and artefacts surrounding the outer walls of the gland which
occasionally share similar intensity values.

The WB-GARM achieves promising results in glandular segmentation. The performance of our
algorithm in histopathological applications was evaluated on a set of five greyscale biopsy
samples with complex textures. For textural features, due to the inter-gland variance in lumen
surface texture, foreground samples and two background samples were chosen a priori. High
segmentation accuracy was achieved in the majority of tests, with only minor error contours

being formed from misclassification. An optional source filtering technique to help remove
blood cells and other speckled content is applied in a pre-processing stage as this was found to
help avoid a majority of such misclassifications.

4.2.1 Background information on Colon Cancer

Colon cancer is amongst the leading types of cancer affecting the population of Great Britain
today, with high rates of incidence in England (36,100 cases in 2002), Scotland (1014 cases) and
Northern Ireland (307 cases) [108][109]. On average there are 100 new cases reported every day
and, as a result of increases in life expectancy, the frequency of its occurrence is rising in the
ageing population. With regular screening, the disease can often be detected in its early stages
and treated quite effectively. It is here where a need for improved quantitative analysis of
histopathological images can aid in decreasing the time and work required to obtain a reliable
diagnosis [96]. The vast majority of colorectal cancers are removed at a very advanced stage,
making the prognosis of the disease dependent on the depth of growth and spread of the tumour.
More than 85% of colon cancers arise from dysplastic polyps (growths on the lining of the colon
with abnormal cells) which may be present in a patient for 5-10 years before malignant
transformation takes place [97]. The type of cell that is responsible for forming the polyp varies
and is important in determining its potential for developing into a cancer. In order for screening
to be effective, it must detect in the earliest phases of cancer certain features which may indicate
a rapid progression.

(a) (b) (c)


Figure 4.4 – Three colon biopsy specimens featuring variations of glandular size, texture and
intensity.

The key feature of interest in diagnosing biopsy samples is one of size. Polyps larger than 1 cm
in diameter are generally more likely to undergo malignant transformation than those of
smaller size. The risk of developing carcinoma from a polyp is proportionally related to this
metric - typically 0% risk if the polyp is smaller than 5 mm, a 1% risk at 5 to 10 mm
and a 10% risk at sizes of 10-20 mm [98].

4.2.2 Prior work in Glandular Segmentation

In previous classification approaches, which, like image segmentation, share the goal of
distinguishing different classes inside an image, it has been shown that although advancements
such as 2DPCA perform well on biopsy specimens, a high computational complexity and a
large dimensionality of features can render such methods inefficient. It is here that researchers
in classification have discovered that the use of local texture features can aid in minimizing the
size of a feature vector whilst maintaining a good level of accuracy [94][102].

Principal component analysis (PCA) is a technique for analysing multivariate data - it can be
explained using a simple analogy. One may imagine a painting drawn using a palette of n
different mixtures of paint, where each mixture was composed of different amounts of a common
set of pigments. One may also imagine that spectral noise is present and that each point of the
painting was drawn using only a single paint mixture. Principal component analysis may be used
to find the linear combination of pure pigments which was used to make each mixture. Each of
these combinations is known as a principal component. PCA is widely used in computer vision
problems requiring facial recognition and image modelling. Two-dimensional principal
component analysis (2DPCA) operates on 2D matrices rather than the 1D vectors of standard
PCA and is capable of obtaining a higher recognition accuracy.
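The paint-mixture analogy corresponds to an eigendecomposition of the data's covariance matrix. A minimal NumPy sketch follows (illustrative only; the function and variable names are our own, not part of any cited implementation):

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal components of X and the projection
    of the (centred) data onto them.

    X : (n_samples, n_features) data matrix.
    """
    Xc = X - X.mean(axis=0)                 # centre each feature
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigh: for symmetric matrices
    order = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    components = vecs[:, order]             # the "pure pigments"
    return components, Xc @ components      # projections = mixture weights

# Example: noisy 2-D data whose variance lies mostly along one direction
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0]])  # rank-1 structure
X += 0.1 * rng.normal(size=(200, 2))                    # "spectral noise"
components, projections = pca(X, k=1)
```

The first component recovered should point (up to sign) along the dominant direction [3, 1] of the synthetic data.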
Medical image segmentation approaches have taken into account a wide variety of different
image features such as contrast, correlation, colour and inverse difference moment to
discriminate between benign and cancerous samples [103][104]. Unfortunately, due to the
complexity and quality of the biopsy images often being used (many suffer from irregular
shapes, sizes and poor contrast) as well as the sometimes sporadic nature of cancerous samples,
these approaches have been unable to yield reliable results in this area of medical image
processing.

(a) (b)

Figure 4.5 - Regions of interest in a manually segmented colon biopsy sample. In (a) we see an
example of a highlighted colon gland featuring an area of lumen surrounded by a boundary and
in (b) we see the focused area of lumen which we wish to segment.

(a) (b)

(c) (d)

Figure 4.6 – A visual analysis of the difficulties encountered in glandular segmentation: (a) A
colon biopsy specimen featuring dark speckles of similar intensity to the nuclei around the
boundary of the lumen. (b) An absolute binary threshold of (a) which maintains large black
regions signifying areas of interest and speckles which interfere with segmentation at this
resolution. (c) A colour-map of the lumen (Ln) in the foreground class; in red and black are the
speckle artefacts we do not wish to include in our input to the WB-GARM. (d) A processed
biopsy specimen which has had speckle artefacts selectively thresholded out. This leaves a
simpler two-class classification and segmentation problem where the algorithm must separate the
boundary from the lumen.

Prior studies have performed lumen segmentation using a threshold region growing method
[105] harnessing the difference in intensity values between air and colon wall tissue to enable the
use of threshold methods to distinguish between the two different regions during segmentation.
Use of a threshold Level Set Method has also been attempted in studies of colon-wall
segmentation for increased accuracy [106], using a modified Active Contour model to attract a
level set to the object's boundary.
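A threshold region-growing step of the kind used in [105] can be sketched as a flood fill from a seed pixel. This is a generic formulation under assumed parameters, not the cited implementation:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=25):
    """Threshold region growing from a seed pixel: flood-fill all
    4-connected neighbours whose intensity lies within `tol` of the
    seed's intensity. `tol` is an illustrative value."""
    mask = np.zeros(img.shape, dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

On an image with a sharp intensity difference between air and colon-wall tissue, the grown mask stops at the intensity step, which is the behaviour the threshold methods above rely on.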

Although these methods are theoretically sound for very basic colon biopsy samples, more
complex samples which are either of low intensity or feature contorted lumen regions fail to
segment clearly using either of these algorithms. The above warrants further investigation into
contour-based approaches for glandular segmentation, which is an ideal candidate application for
the WB-GARM model. As our approach benefits from both boundary and region forces as well
as optimised wavelet feature vectors for improved texture segmentation, there is a possibility of
it being more discriminant than some of the prior methodologies outlined.

(a) (b)

(c) (d)

Figure 4.7 – A close-up of surrounding artefacts: (a) a sampling of the pixels found on the
boundary of the lumen (nuclei); (b) a similar sample of the cell speckles found in the fluid
surrounding the glands. As demonstrated by the histograms in (c), an intensity analysis of the
boundary nuclei, and (d), an analysis of the fluid speckles, the similarity in pixel intensity
between both samples can present a challenge when segmenting biopsy slides, as both texture
samples have the fluid as a boundary and similar contrast properties.

4.2.3 Application of WB-GARM to Glandular segmentation

Our procedure for segmenting lumen from a biopsy specimen is as follows:

1. As the intensity levels of both the black nuclei found on the boundary of the gland and those
of the speckles in the fluid surrounding it are very similar, a process to remove these artefacts
is performed upon the image. The process which completes this is very simple and takes into
consideration properties such as height, width and the shape of the speckles - generally
circular in nature. By estimating whether a region found fits this profile, one may remove the
area of pixels containing it and thus decrease the number of speckles the segmentation
algorithm needs to handle. The product of applying cell speckle removal to an image I is
referred to as S(I).

2. A vector of Wavelet Packet feature images (WPµ) is generated at levels 1 and 2 using S(I) as
their input and saved to a local cache.

3. The WB-GARM is supplied with two lists of co-ordinates F and B which represent the texture
samples to be extracted from S(I) for supervised learning. WPµ is also loaded into the workspace
at this stage and is used to create the texture descriptors.

4. Next, the original untouched source image I is loaded into the segmentation algorithm along
with S(I) (the version with cell speckles removed) – S(I) is the image input directly into the
segmentation approach. Here the segmenting contour is attracted to the boundaries of the
foreground objects in S(I), resulting in a segmented image which utilizes both Gabor and Wavelet
packet features to achieve its segmentation. As a final step, the pixels defining the segmentation
contours for S(I) are copied from the output of the WB-GARM and overlaid on I so that a cell
biopsy containing all original artefacts (including cell speckles) is generated for physicians to
use.
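Step 1 of the procedure above (cell speckle removal) might look roughly as follows. This is an illustrative reconstruction: the intensity, area and compactness thresholds are assumed values, not those used in the thesis.

```python
import numpy as np
from collections import deque

def remove_speckles(img, dark_thresh=80, max_area=60, min_fill=0.5):
    """Return S(I): the image with small, roughly circular dark speckles
    painted over with the mean background intensity."""
    dark = img < dark_thresh
    bg = float(img[~dark].mean()) if (~dark).any() else float(img.mean())
    out = img.copy()
    seen = np.zeros(img.shape, dtype=bool)
    for r0 in range(img.shape[0]):
        for c0 in range(img.shape[1]):
            if not dark[r0, c0] or seen[r0, c0]:
                continue
            # flood-fill one connected dark component
            comp, queue = [], deque([(r0, c0)])
            seen[r0, c0] = True
            while queue:
                r, c = queue.popleft()
                comp.append((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                            and dark[nr, nc] and not seen[nr, nc]):
                        seen[nr, nc] = True
                        queue.append((nr, nc))
            rows = [p[0] for p in comp]
            cols = [p[1] for p in comp]
            h = max(rows) - min(rows) + 1
            w = max(cols) - min(cols) + 1
            # small + compact (disc-like) components are treated as speckles
            if len(comp) <= max_area and len(comp) / (h * w) >= min_fill:
                for r, c in comp:
                    out[r, c] = bg
    return out
```

Large dark components, such as the nuclei chains forming a gland boundary, exceed the area threshold and are left untouched; only small compact blobs are painted out.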

4.2.4 Results on Glandular Segmentation

The WB-GARM achieved promising results in this area. The experiments with histopathology
images were conducted on a set of five greyscale biopsy samples measuring 256x256 pixels on a
Pentium 1.6 GHz dual-core PC system. For textural features, due to the inter-gland variance in
lumen surface texture, a set of five samples was used for supervised segmentation; three from the
foreground class Fθ and two from the background class Bθ. Correct segmentation accuracy was
achieved with the majority of tests with only minor error contours being formed from
misclassification. Here, the form of supervised segmentation used is the same as that used in
Geodesic Active Region Model (GARM). The GARM requires a priori texture samples from both
the foreground and background of an image that one wishes to segment. The added step of
removing cell "speckles" which appear in some of the histopathological images demonstrated in
this thesis can also help improve segmentation quality by lowering the quantity of regions which
could be misclassified as belonging to the foreground. One of the biggest challenges which
pathologists face when selecting a computational segmentation approach is finding one that can
appropriately handle the low levels of contrast difference between glands, cells and other regions
of the cell biopsy images. As our experiments demonstrate, Wavelet Packet texture descriptors

provide adequate feature vectors for this problem which can transform the Geodesic Active
Region model into a more robust tool for texture segmentation in medical imaging. The average
processing time using GDI+ and .NET in our C++ implementation to generate the necessary
wavelet packet feature images from levels one and two, based on a distribution of 12 tests, is
approximately 7 seconds. Sub-bands selected from the set of Wavelet packet features for use in
the new texture descriptors are chosen based on the "usefulness" of the information they contain
- certain sub-bands, such as those found at scale 2 {6, 8, 10 and 11}, contain very strong edge and
textural features which can greatly assist in improving the Gabor filter bank currently used in the
Active Region Model. Other sub-bands at this scale are discarded as they do not contain
sufficient edge information to be of great advantage. Sub-bands for the texture descriptors are
also chosen based on their resolution with the optimal sub-bands for this particular application
being found at scale 2 - scale 1 contains too few sub-bands to obtain a sufficiently large breadth
of edge information and scales 3 and above have a resolution which is too low to be helpful.
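The level-2 sub-band selection described above can be sketched with a Haar wavelet-packet decomposition; the mean-energy score below is an assumed stand-in for the "usefulness" criterion, not the thesis's exact rule:

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step along both axes: returns the four
    sub-bands (approximation, horizontal, vertical, diagonal)."""
    a = (x[0::2, :] + x[1::2, :]) / 2          # lowpass rows
    d = (x[0::2, :] - x[1::2, :]) / 2          # highpass rows
    return {"a": (a[:, 0::2] + a[:, 1::2]) / 2,
            "h": (a[:, 0::2] - a[:, 1::2]) / 2,
            "v": (d[:, 0::2] + d[:, 1::2]) / 2,
            "d": (d[:, 0::2] - d[:, 1::2]) / 2}

def packet_level2(img):
    """Full 2-level wavelet-packet tree: every level-1 band is split
    again, giving 16 level-2 sub-bands keyed by paths like 'ah'."""
    level2 = {}
    for p1, band in haar_split(img).items():
        for p2, sub in haar_split(band).items():
            level2[p1 + p2] = sub
    return level2

def select_subbands(img, keep=4):
    """Rank the 15 detail sub-bands by mean energy (an assumed
    'usefulness' score) and keep the strongest few."""
    bands = packet_level2(img)
    scores = {p: float(np.mean(b ** 2)) for p, b in bands.items()
              if p != "aa"}                    # skip the pure approximation
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```

For a 256x256 input this yields 16 sub-bands of 64x64 at level 2, matching the scale at which the strongest edge information was observed.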

As discussed, selective sub-band use from multiple-scales was investigated, and although initial
experiments displayed a variety of improvements in certain tests, it is certainly an area that could
be researched more in the future. WB-GARM Lab – a C# evolution of our implementation of the
Geodesic Active Region Model – experienced processing times of between 8 and 15 minutes on
biopsy slides, with a median segmentation time of 10 minutes. These times depended on several
factors: (1) image complexity, (2) the number of iterations to be conducted and (3) available system
memory. WB-GARM Lab typically required 120MB of system RAM during core processing
with an upper limit of 180MB and a lower limit of 90MB based on the test being conducted.

Similar quantitative measures for establishing the quality of segmentation results such as those
previously used in this thesis will allow an accurate evaluation of our algorithm against both the
Active Contour Model [32] and the Geodesic Active Region Model [67]. These approaches were
selected for two primary reasons. Firstly, they are both based on the deformation of a contour
model through slightly differing methods - one evolves a snake using a local minimization of
energy whilst the other deforms its final contour based on region and boundary forces. Secondly,
their results offer an evolutionary view of texture segmentation as a problem domain, i.e. the
GARM evolved from the ACM, and the WB-GARM is an evolution of the GARM. The
performance of these methods
will offer an insight into the proposed approach.

The results from the ACM, the GARM and the WB-GARM are presented below. The
setup and configuration information for each model's implementation are also listed for reference
purposes.

4.3 Segmentation Setup

4.3.1 Data

The human colon tissue samples used for testing purposes in Glandular segmentation were
acquired by colleagues from Yale University School of Medicine from archival hematoxylin and
eosin stained micro-array tissue sections. The original images used were of size 1080x1024 with
no additional sub-sampling or compression applied to the input. A window size of 60x60 was
employed here.

4.3.2 Texture

For each directional window (Wn) of size 96x96 being examined, texture sampling focused on
extracting the largest possible regions from Wn for each distinct texture and typically measured
[20x20], [32x32], [48x48],[64x64] or [96x96]. Good results were achieved by opting for three
lumen surface textures for the foreground class and three background class samples composed of
one boundary (dark gray-black) area and two fluid areas containing black coloured cells. Texture
samples of sizes less than [14x14] (typically fluid cells) were found to be ineffective in assisting
the description of texture in this application, which is one of the reasons for integrating a
dedicated thresholding step for the reduction or removal of miniature artefacts prior to
initialisation of the texture segmentation model.

This has proved quite effective as can be seen by the experimental results presented later in this
Chapter. The requirement for a thresholding step does not limit the discrimination abilities of the
model as can be observed by the marked improvements over the GARM in contrast tests, which
are free of initial thresholding.

4.3.3 Ground truth generation

Manual image segmentations are created by human observers (such as physicians) who plot a
line around the main objects of interest in an image. Once this process has been completed, an
average of all the manually segmented images available is made in a computational step that
creates the Ground truth image that is used for comparison with computer-based segmentation of
the same photograph.

When one is creating a manual segmentation, indentations, corners and curves are drawn around
the boundaries of objects of interest. On a computer, these are visually created using tools (such
as a Bezier curve tool) which allow one to draw a parametric curve based on a series of points - one for
each change in direction the line being drawn takes. For the purposes of evaluating the contour
curves generated through manual segmentation and those generated using computational
methods, we devised a method for segmentation comparison based on this concept.

Using the digital Ground truth image, we trace around the manually segmented boundaries of the
objects of interest using parametric curves via a freeform curve tool. This can be done in any
popular Image Editing suite and allows the generation of a point-based bezier curve along the
same path of pixels which define the segmented boundaries. There is no loss of accuracy
incurred. The same approach is applied to the image output by the segmentation approach being
tested which results in two groups of bezier-points which may be compared by overlaying one
over the other.

The number of bezier-points in the algorithmic-segmentation which pass through the same points
as the Ground truth points allow one to compare how close the segmentation was to what we
would ideally desire. This is called the pass-through rate. A 100% pass-through rate
((points_ground truth / points_algorithm) × 100) would indicate that the algorithmic-segmentation correctly
segmented all the boundaries of the objects of interest whilst a lower pass-through rate would
suggest that certain areas of the image's foreground or background may have been incorrectly
classified.
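The pass-through rate can be computed directly from the two sets of Bezier points. The sketch below normalises by the number of ground-truth points, which is consistent with how the percentages in the later tables behave; the tolerance parameter is our addition, reproducing the "within 3 pixels" measure reported alongside them:

```python
def pass_through_rate(algo_points, gt_points, tol=0):
    """Percentage of ground-truth Bezier points matched by the points
    of the algorithmic segmentation. Points are (row, col) tuples;
    tol=0 demands exact overlap, tol=3 gives the relaxed
    'within 3 pixels' measure."""
    matched = 0
    for g in gt_points:
        if any(abs(g[0] - a[0]) <= tol and abs(g[1] - a[1]) <= tol
               for a in algo_points):
            matched += 1
    return 100.0 * matched / len(gt_points)
```

A rate of 100% indicates every ground-truth boundary point was captured; 50% or below indicates at least half the object boundaries were missed.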

SETUP INFORMATION

Active Contour Model (ACM)

σ (the scale parameter in the Gaussian kernel for smoothing) = 1.5, timestep = 5, µ (the
coefficient of the internal energy term φ) = 0.04, λ (the coefficient of the weighted length term
Lg(φ)) = 5, α (the coefficient of the weighted area term Ag(φ)) = 1.5 and the average value of n
(the number of iterations) = 400. Average processing time = 3-4 minutes.

Geodesic Active Region Model (GARM)

Number of foreground textures used = 3, number of background textures used = 3, size of Gabor
range = 100, number of Gabor kernels used = 4 per component, average number of iterations to
stable final contour = 150, implementation-specific average processing time = 9-12 minutes.

Wavelet-based Geodesic Active Region Model (WB-GARM)

Initial threshold processing applied to each source image. Number of foreground textures used =
3, number of background textures used = 3, size of Gabor range = 50, number of Gabor kernels
used = 4-20 per component (based on the number of wavelet packets used), wavelet packet
sub-bands used: all sub-bands from levels 1 and 2 (4 + 16 = 20), average number of iterations to
stable final contour = 140, implementation-specific average processing time = 10-15 minutes.

4.4 Results on images without the thresholding of lymphocytes

In this set of images, minor contrast adjustment was applied to the original sources to enhance
the luminosity of areas that fall inside the nuclei boundaries. Such morphological filtering of the
input is not necessary when the lymphocytes (speckles found outside the lumen boundaries) are

thresholded to reduce their visibility in the image. The results of lymphocyte thresholding will be
presented shortly.

(a) (b) (c)

Figure 4.8 - Glandular segmentation results without lymphocyte thresholding.

Figure 4.8 (a): Encouragingly, at many points the contour has been attracted to the desired
boundary edges, but minor misclassifications have been made around some of the lymphocytes
to the left of the image. The desired boundary edges in an image are the line of distinguishable
points between the foreground and background classes as indicated by the boundaries in an
image's ground truth. A good segmentation is defined as one where the majority of (Bezier-
traced) line points of curvature for a segmentation overlay the same points as those defining the
boundaries in the image's ground truth. As discussed, a deformable point-by-point Bezier version
of a contour may be manually generated by tracing over a segmented image's boundaries using a
freeform Bezier tool. A typical quality threshold for points that successfully overlap the ground
truth is 70-80%. A poor segmentation would have 50% or fewer of the true points of curvature,
as this suggests that at least half the areas segmented did not capture the correct object
boundaries. Figure 4.8 (b): In this result, correct segmentation of the lumen areas is made, but
there are still points of curvature that the segmentation does not pass through correctly - due to
the low contrast difference between the boundary texture samples and the speckle texture
samples, a perfect segmentation is not achievable. Figure 4.8 (c): This is perhaps the most
interesting of the three specimens in this group. As the colon lumen contains a more uniform
intensity than in either of the previous figures, segmentation becomes relatively easier,
demonstrating that if a sample from this class of image does have a balanced variance in

contrast, more accurate segmentations are possible. With histopathology images
of this nature there can appear certain artefacts in test images with texture features similar to the
glands we wish to segment. These are typically cells which are round, small and speckled in
nature. In an ideal segmentation these cells would be classified as part of the background. In
some cases however, due to the similarities in topology, they can get misclassified as belonging
to the foreground class. Here, contours which are formed around them erroneously are known as
error contours and shall be addressed shortly.

4.5 Results on images using lymphocyte thresholding

In this section we will compare the results of four colon biopsy slides whose lymphocytes have
been segmented using the same distribution of texture samples as the previous result set, with an
additional pre-processing step to aid the segmentation.

White blood cells help the human body to fight against diseases and infections. Lymphocytes are
a type of small white blood cell, usually 7-8 micrometers in diameter, which are present in the
blood. Their purpose is to help provide a specific response to dangerous micro-organisms when
they have infiltrated the body's main defence systems. Lymphocytes also help to protect the body
from tumours - tissues which grow at an accelerated rate compared to normal tissue. Physicians
involved in the area of histopathology may be required to distinguish lymphocytes from other
cells in a biopsy slide. The centre of a lymphocyte consists of large groups of thin threads called
chromatin. When stained with Wright's stain [ref], the nucleus of a lymphocyte appears dark
purple. It is usually round in shape, surrounded by a small quantity of blue cytoplasm (a part of a
cell enclosed by a plasma membrane), but can also appear indented.

One of the major problems in segmenting lymphocytes in histopathology images is that
segmentation models such as the GARM may incorrectly classify speckles surrounding these
objects as being part of the image foreground due to the small size of the area occupied by each
speckle. To overcome this problem, a pre-processing step applies a Gaussian filter of size 7x7 to
the source image. A segmentation of this image is then made using purely the enlarged
lymphocytes as the foreground ROIs. The resulting output is an image containing a range of
contours (and pixel positions) which may be ignored when outputting the contours for our main
segmentation on the source using the selected segmentation approach.
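The 7x7 Gaussian pre-filter described above can be sketched as follows; the sigma value and the direct convolution routine are our own illustrative choices:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """A normalised size x size Gaussian kernel. The 7x7 size matches
    the pre-processing step above; sigma is an assumed value."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(img, kernel):
    """Direct 2-D convolution with edge padding (slow, dependency-free)."""
    n = kernel.shape[0]
    pad = n // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + n, c:c + n] * kernel)
    return out
```

Blurring spreads each small speckle over a larger support, which is what allows them to be captured as foreground ROIs in the auxiliary segmentation described above.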

4.5.1 Specimen 1

(a) – Original Image (b) Observer #1 manual contours

(c) Observer #2 Contour (d) Observer #3 Contour

Figure 4.9 - Hand labelling. Based on the manually drawn contours shown in Figure 4.9 (b)-(d),
a sampling of points is taken from each curve and averaged in order to produce a ground truth
image (or gold standard) which may be used to compare to the outputs from existing algorithms.

(a) ACM at 560 iterations (b) Average inter-observer manual segmentation

(c) ARM (d) WB-GARM

Figure 4.10 – A segmentation comparison between the ACM, ARM and WB-GARM

4.5.1.1 Statistical analysis of results for specimen 1

[Bar chart: correctly segmented boundary points - ACM = 14, GARM = 42, WB-GARM = 180; Gold Standard = 195.]
(a) 195 boundary points in the ground truth (b) – Boundary point comparison

Figure 4.11 – Boundary point comparison for Specimen 1. Figure 4.11 (a) A display of the
number of unique boundary points found in each curve of the source image's ground truth.
Boundary points are estimated and plotted based on the following rule: where the curve's path
changes direction, a boundary point is plotted. A straight line only has two points (the beginning
and end), but a curve may have many points where the path of the curve's line changes. Figure
4.11 (b) A comparison of the number of a segmentation's curve points that correctly pass through
those of the ground truth (i.e. the number of boundary points correctly segmented).

Table 4.2 – Table of Algorithmic comparisons

Algorithm    % points on correct   % points outside correct   % of these points within
             boundaries            boundaries                 3 pixels of boundaries
ACM          7                     93                         11.8
GARM         21.5                  78.5                       14.1
WB-GARM      92.3                  7.7                        40.3

As may be observed above, the ACM performed the least well on this specimen. This comes as
no surprise as the algorithm was not designed for use in complex texture segmentation
applications. GARM, which has been shown to work reasonably well with synthetic texture
images performed a little better; however, neither of these was able to achieve a segmentation as
close to the average of the inter-observer contours as the WB-GARM model. A combination of
application-specific thresholding and sharp mixed-model texture descriptors helped it achieve a
92.3% closeness to the gold standard.

4.5.1.2 The effects of contrast-adjustment on segmentation quality

It has been shown that certain medical images including both colonoscopy [107] and colposcopy
[108] categories can benefit from using contrast and brightness adjustment to improve the
visibility of images prior to using them in computational processing. While this has been touched
upon through in-class contrast adjustment and optional source-adjustment (unsharp masks), the
results from experiments on the medical-image dataset were done independent of further contrast
changes as this allowed the presentation of additional benefits of using the WB-GARM with
other morphological image enhancement techniques.

Our experiments with texture descriptors in the GARM have concluded that applying contrast
adjustments to a histopathological image as part of a pre-processing stage may enhance the
quality of some segmentation results. For this reason, contrast adjustments comparing (1) The
GARM, (2) The WB-GARM and (3) the ground truth of an original image with contrast applied
will also be presented. The first contrast-adjusted example presented below demonstrates a case
where contrast adjustment does not offer a large improvement, with all subsequent contrast-adjusted
examples demonstrating the benefits of this adjustment.

(a) – GARM after 200 iterations (w/Contrast) (b) – WBGARM after 200 iterations (w/Contrast)

[Bar chart: correctly segmented boundary points - GARM = 63, WB-GARM (without thresholding) = 102; Gold Standard = 190.]
(c) Contrast adjusted ground Truth (190 pts) (d) Correct boundary-point comparison

Figure 4.12 – Comparison of results after contrast adjustment. As can be observed, the
results in Figure 4.12 (a) and Figure 4.12 (b) appear at first glance to be quite similar. Although
the WB-GARM offers very minor improvements, this example does not contain a lot of low-
level contrast differences and thus does not hugely benefit from the adjustment. This version of
the implementation does not include an additional threshold filter for artefacts surrounding the
glands – a feature which offers some further improvements and is presented visually shortly.

Table 4.3 – Comparison table for segmentation results after contrast adjustment

Algorithm    % of points passing     % of points outside   % of points that are within
             through correct         correct boundaries    3 pixels of the boundaries
             boundaries
GARM         33.1                    66.9                  13.6
WB-GARM      53.6                    46.4                  19.4

4.5.2 Specimen 2

(a) Original Image (b) Ground truth

(c) ACM (d) GARM

(e) WBGARM

Figure 4.13 – Segmentation comparison

4.5.2.1 Statistical analysis of results for specimen 2

[Bar chart: correctly segmented boundary points - ACM = 11, GARM = 24, WB-GARM = 71; Gold Standard = 132.]
(a) 132 boundary points in the ground truth (b) Correctly segmented points

Figure 4.14– Boundary point comparison

Table 4.4 – Table of Algorithmic Comparisons

Algorithm    % points on correct   % points outside correct   % of these points within
             boundaries            boundaries                 3 pixels of boundaries
ACM          8.3                   91.7                       3.3
GARM         18.1                  81.9                       16.8
WB-GARM      53.8                  46.2                       21.6

In a similar case to the first specimen, the ACM was unable to form contours around the key
areas of interest in the foreground (i.e. the lumen). Both the GARM and the WB-GARM formed
a large quantity of erroneous segmentations in the lower half of the image; however, the WB-
GARM was successfully able to segment the lumen with a reasonable level of accuracy whilst
the GARM only formed contours around lumen within a short proximity of the texture samples
used.

4.5.2.2 Improvements obtained through contrast adjustment

The morphological operations applied to the sample for this experiment were as follows: a 40%
increase in contrast was applied to the specimen along with a 20% increase in brightness. The
low-intensity areas of the image were further enhanced by applying a darkening filter to
emphasize the borders and lymphocytes around each area of lumen.
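The adjustment described above can be approximated with a linear point operation; the mapping below is a hedged sketch, as the thesis does not specify the exact transfer function, and the darkening gain and threshold are assumed values:

```python
import numpy as np

def adjust(img, contrast=1.4, brightness=0.2, dark_gain=0.85, dark_thresh=90):
    """Approximate a 40% contrast and 20% brightness increase, then
    darken low-intensity regions to emphasise borders and lymphocytes.
    All parameter values are assumptions for illustration."""
    out = img.astype(float)
    out = (out - 128.0) * contrast + 128.0      # contrast about mid-grey
    out = out + brightness * 255.0              # brightness lift
    low = out < dark_thresh                     # low-intensity areas
    out[low] *= dark_gain                       # darkening filter
    return np.clip(out, 0, 255).astype(np.uint8)
```

The net effect is to push mid-grey lumen interiors brighter while deepening the already-dark boundary nuclei, widening the contrast gap the segmentation relies on.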

(a) GARM after 150 iterations (b) WB-GARM after 150 iterations

[Bar chart: correctly segmented boundary points - GARM = 7, WB-GARM (without thresholding) = 94; Gold Standard = 135.]

(c) 135 ground-truth boundary points (d) Boundary point comparison

Figure 4.15 - Comparison of results after contrast adjustment

Table 4.5 – Comparison table after contrast adjustment

Algorithm    % of points passing     % of points outside   % of these points that are
             through correct         correct boundaries    within 3 pixels of the
             boundaries                                    boundaries
GARM         5.1                     94.9                  10.9
WB-GARM      69.6                    30.4                  14.6

The contrast adjusted Specimen 2 offers more insights into the GARM's pre-processing
dependence in order to be effective on this application's distribution of images. The GARM on
its own is unfortunately insufficient for segmenting this type of medical image without further
assistance. In contrast, the WB-GARM, while forming many incorrect segmentations, did
manage to form perfect contours in some cases, with 87 more points falling on the boundary than
with the GARM.

4.5.3 Specimen 3

(a) Original Image (b) Ground truth

(c) Active Contour Model (d) Active Region Model

(e) WB-GARM

Figure 4.16 – Segmentation comparison

4.5.3.1 Statistical analysis of results for Specimen 3

[Bar chart: correctly segmented boundary points - ACM = 20, GARM = 65, WB-GARM = 162; Gold Standard = 222.]
(a) 222 boundary points in ground truth (b) Boundary point comparison

Figure 4.17 – Boundary point comparison

Table 4.6 – Table of Algorithmic Comparisons

Algorithm    % points on correct   % points outside correct   % of these points within
             boundaries            boundaries                 3 pixels of boundaries
ACM          9                     91                         4
GARM         29.2                  70.8                       10.1
WB-GARM      72.9                  27.1                       20.8

With more points on the boundaries than any of the previous specimens, this could be considered
the most texturally complex slide tested so far. From a lumen-segmentation perspective, the
Active Region model performed quite poorly, incorrectly capturing parts of the background after
150 iterations. In comparison to the ACM and GARM the WB-GARM performs quite well,
creating reasonably promising contours around the main areas of lumen. There is however some
room for improvement here as a desirable percentage of correct boundary points segmented
would be closer to that of Specimen 1.

4.5.3.2 Improvements obtained through contrast adjustment

(a) GARM results after contrast adjustment (b) WB-GARM results after contrast adjustment

[Bar chart: correctly segmented boundary points - GARM = 12, WB-GARM (without thresholding) = 138; Gold Standard = 229.]

(c) Ground truth of contrast adjusted source – 229 points (d) Boundary comparison

Figure 4.18 – Comparison of results after contrast adjustment

Table 4.7 – Comparison table for segmentation results after contrast adjustment

Algorithm    % of points passing     % of points outside   % of these points that are
             through correct         correct boundaries    within 3 pixels of the
             boundaries                                    boundaries
GARM         5.2                     94.8                  8.2
WB-GARM      60                      40                    12.3

This set of results requires analysis from two perspectives: (a) which algorithm more correctly
segments contour points falling on the majority of the lumen-nuclei boundaries, and (b) which
model creates a reasonable result with fewer error contours. With respect to (a), it is clear that the
WB-GARM model (with 138 correct boundary points) performs a significantly better job of
segmenting lumen than its counterpart; however, with respect to (b), the GARM forms far fewer
error contours than the WB-GARM, creating a reasonable result that is much more free of
incorrect segmentation artefacts.

4.5.4 Specimen 4

(a) Original Image (b) Ground truth

(c) ACM (d) GARM

(e) WB-GARM

Figure 4.19 – Segmentation comparison

4.5.4.1 Statistical analysis of results on Specimen 4

[Bar chart: correctly segmented boundary points - ACM = 22, GARM = 91, WB-GARM = 268; Gold Standard = 312.]
(a) 312 boundary points in ground truth (b) Boundary point comparison

Figure 4.20 – Boundary point comparison

Table 4.8 – Table of Algorithmic Comparisons

Algorithm    % points on correct   % points outside correct   % of these points within
             boundaries            boundaries                 3 pixels of boundaries
ACM          7                     93                         5
GARM         29.1                  70.9                       13.76
WB-GARM      85.8                  14.2                       68.18

Due to the widespread occurrence of indentations along the borders of lumen in Specimen 4,
there was an increase in the number of points required to define the ground truth boundaries.
This raised the quality level required by any segmentation algorithm as there was a tightened
restriction on how deviated a contour could be and still fall within a reasonable number of
boundary points to be considered promising. The shortcomings of the GARM's result become a
little clearer here: despite the contours of average quality it creates around the main areas of
interest, their sporadic nature and discontinuous properties fail to make the GARM an adequate model for use
in medical applications. Once again, the ACM demonstrates that although it is a worthy tool in

86
simple segmentation tasks, this is not an area where they can excel without alteration. Unlike the
other two models, the WB-GARM manages to create an acceptable result with only a small
quantity of uncaptured areas. Its contours pass through 85.8% of the stringent points laid down
by the vertices and could be used in real-world applications with minor improvements.
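The pass-through percentages reported in these tables follow directly from nearest-point distances between the ground-truth vertices and the segmented contour. The sketch below illustrates one way such an evaluation can be computed; the exact on-boundary tolerance used in this chapter is not stated, so the `tol` and `near` parameters are assumptions, and a densely sampled contour is assumed.

```python
import numpy as np

def boundary_point_metrics(gt_points, contour_points, tol=0.5, near=3.0):
    """Score a segmented contour against ground-truth boundary vertices.

    Returns (% of vertices the contour passes through, % it misses, and,
    of the missed vertices, the % lying within `near` pixels of the
    contour). Tolerances are illustrative assumptions."""
    gt = np.asarray(gt_points, dtype=float)
    c = np.asarray(contour_points, dtype=float)
    # distance from each ground-truth vertex to its nearest contour point
    d = np.sqrt(((gt[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
    on = d <= tol
    pct_on = 100.0 * on.mean()
    pct_off = 100.0 - pct_on
    missed = d[~on]
    pct_near = 100.0 * (missed <= near).mean() if missed.size else 0.0
    return pct_on, pct_off, pct_near
```

A sparse polyline ground truth would instead need point-to-segment distances, but for contours rasterised at pixel resolution the nearest-point form above suffices.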

4.5.4.2 Improvements obtained through contrast adjustment

(a) GARM result after contrast adjustment (b) WB-GARM result after contrast adjustment
(c) 245 boundary points in ground truth (d) Boundary comparison: GARM 117 points, WB-GARM (without thresholding) 153, against the gold standard of 245

Figure 4.21 – Comparison of results after contrast adjustment
Table 4.9 – Comparison table of segmentation results after contrast adjustment

Algorithm | % of points passing through correct boundaries | % of points outside correct boundaries | % of these points within 3 pixels of the boundaries
GARM | 47.7 | 52.3 | 13.26
WB-GARM | 62.4 | 37.6 | 16.9

4.5.5 Specimen 5

(a) Original Image (b) Ground truth (c) ACM (d) GARM (e) WB-GARM

Figure 4.22 – Segmentation comparison
4.5.5.1 Statistical analysis of results for Specimen 5

(a) 398 boundary points in ground truth (b) Boundary comparison: ACM 31 points, GARM 79, WB-GARM 332, against the gold standard of 398

Figure 4.23 – Boundary comparison

Table 4.10 – Table of Comparative Results

Algorithm | % points on correct boundaries | % points outside correct boundaries | % of these points within 3 pixels of boundaries
ACM | 7.7 | 92.3 | 3.7
GARM | 19.8 | 80.2 | 19.1
WB-GARM | 83.4 | 16.6 | 37.7

The GARM performed better on this specimen than in some of the prior tests. Unfortunately, due to the complexity of the lumen borders and the increased resolution of each area to be captured, it was unable to fall on the majority of the 398 boundary points of indentation in the ground truth. The WB-GARM, however, managed to pass through 83.4% of these points with minimal error artefacts being captured in the foreground.

4.5.5.2 Improvements obtained through contrast adjustment

(a) GARM result after contrast adjustment (b) WB-GARM segmentation after contrast adjustment
(c) 395 boundary points in contrast-adjusted ground truth (d) Boundary comparison: GARM 54 points, WB-GARM (without thresholding) 263, against the gold standard of 395

Figure 4.24 – Comparison of results after contrast adjustment
Table 4.11 – Comparison table for segmentation results after contrast adjustment

Algorithm | % of points passing through correct boundaries | % of points outside correct boundaries | % of these points within 3 pixels of the boundaries
GARM | 13.67 | 86.33 | 8.2
WB-GARM | 66.5 | 33.5 | 12.3

Once again, we are presented with two segmentations of varying quality. The strict nature of the evaluation technique leaves many of the GARM contours (which fall on incorrect boundaries) uncounted, resulting in a poor pass-through rating. The WB-GARM performs to a higher level of accuracy here, though not as high as in the previous non-contrast experiment. This is still a usable result, and it highlights the WB-GARM's ability to perform well with and without external morphological operations beyond those within the model itself. Overall, across both tests, the WB-GARM demonstrates that it is capable of providing improved segmentation results over those offered by both the ACM and the GARM.
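The contrast adjustment applied before these re-runs can be realised in several ways; a percentile-based linear stretch is one plausible, inexpensive form. The sketch below is illustrative only: the 2nd/98th percentile limits are assumptions, not the exact adjustment used in these experiments.

```python
import numpy as np

def stretch_contrast(img, low_pct=2.0, high_pct=98.0):
    """Linearly map the [low_pct, high_pct] percentile range of `img`
    onto [0, 255], clipping the tails. A hypothetical stand-in for the
    pre-segmentation contrast adjustment discussed above."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```

Discarding the extreme percentiles keeps isolated stain artefacts from dominating the mapping, which matters for lightly stained histology slides.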

4.6 Overview and discussion of results


Figure 4.25 – Percentage of correctly segmented boundary points for the ACM, GARM and WB-GARM across Samples 1–5: a distribution comparison

4.6.1 Summary of the algorithm’s performance

As can be observed in Figure 4.25, the above results, which have been evaluated using boundary-point vertex models based on the average intra-observer contours, give a clear indication that the new WB-GARM offers a significant improvement in lumen segmentation of the colon glands over the other two models analysed. The advantages attained by the descriptor enhancements provided by the Wavelet Packet basis are further reaffirmed by the WB-GARM's ability to perform well in both original and contrast-adjusted colour spaces. From both visual and quantitative analysis, the results appear promising.

4.6.2 Areas for improvement

Certain results still appear to be affected by minor misclassifications of the foreground, resulting in minor error contours. While the majority of these artefacts have been effectively handled using initial speckle thresholding, there is still considerable room for improvement. The speckle thresholding itself could be advanced to include adaptive matrices with more complex intensity maps, increasing the probability of successfully cleaning the human colon tissue samples before segmentation is initialised.
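One simple realisation of speckle thresholding is an area opening: label the connected components of the binary foreground and discard those below a size threshold. The sketch below illustrates the idea; the `min_area` value is an assumed parameter, and this is not necessarily the exact routine used in the model.

```python
import numpy as np
from scipy import ndimage

def remove_speckles(mask, min_area=50):
    """Drop connected foreground components smaller than `min_area`
    pixels from a boolean mask (4-connectivity). A hypothetical
    area-opening version of the speckle-thresholding step."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.copy()
    # pixel count of each labelled component (labels run 1..n)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = np.asarray(sizes) >= min_area
    return keep[labels]
```

An adaptive variant, as suggested above, would vary `min_area` with local intensity statistics rather than using a single global threshold.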

4.6.3 Summary

The approach for image segmentation previously proposed has been adapted to solve the problem of glandular lumen segmentation of human colon tissue. Our approach uses a combination of boundary and region forces coupled with enhanced Wavelet Packet texture descriptors, which allow us to define detailed boundary information more clearly than the original source algorithm, the Geodesic Active Region Model. The resulting framework, the WB-GARM, has been optimized for the particular application of lumen segmentation with the inclusion of exterior speckle thresholding for further improved results. Based on a median performance rating of 83.4% (the closest to a perfect result achieved was 92.5%), the WB-GARM seems to be a good candidate for use on real-world histology images and would offer a computationally resourceful alternative for segmenting lumen in histopathological applications.

Chapter 5

Thesis Summary and Conclusions

5.1 Summary

The subjects of this thesis are: (i) the proposal of a Wavelet-based texture descriptor with
boundary enhancement for use in texture segmentation applications and (ii) the effects of this
texture descriptor when compared to the conventional unenhanced Gabor filters found in an
existing region-based segmentation approach - the Geodesic Active Region model. This
comparison is made in particular between the segmentation results of the newly presented
method and the GARM's untouched Gabor filters on a set of grayscale histopathological images,
where the improvements offered by the new method are promising.

Chapter 2 investigated existing supervised methods used to segment textured images. A summary of these included discussions of edge-based, region-based and finally contour-based segmentation models, which have been prevalent in medical imaging for the past few years. With reference to snake-based models, the Active Contour Model and the Geodesic Active Region Model were reviewed in great detail, as they directly relate to the new texture descriptor proposed in this thesis. The latter, the GARM, is used as part of a demonstration of how the new texture descriptors can be used in conjunction with existing Gabor filter texture descriptors to provide an improved segmentation solution. Morphological image enhancement methods such as unsharp masking and sharpening were also illustrated in this chapter as part of an investigation into techniques for improving the visibility of object boundaries inside a source image.

Chapter 3 introduced a new Wavelet-based texture descriptor with boundary enhancement, which is the methodology proposed in this thesis. The chapter began by detailing wavelet theory, including forward and inverse Wavelet and Wavelet Packet transforms. Due to their multi-scale nature, these were found to provide a wider set of edge and boundary information than other edge-detection methods. It was also shown that wavelet packets are capable of representing some of the primary edges of image objects in such a way that their features may be morphologically altered to generate a set of images containing only region outlines. Sample "patches" from images in this set were then taken and combined, via a pixel addition operation, with equivalent patches from the foreground samples supplied to the Gabor kernels in the GARM. This effectively results in a segmentation approach which, rather than simply segmenting a source image, segments an image whose primary region and boundary information is taken into account, yielding improved segmentation.
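As a rough illustration of the enhancement idea described above, the sketch below reconstructs only the detail (edge) energy of a wavelet packet tree and adds it back onto the source patch via pixel addition. It assumes the PyWavelets library; the sub-band handling, normalisation, and the omission of the morphological alteration step are all simplifications of the actual pipeline, not a reproduction of it.

```python
import numpy as np
import pywt

def boundary_enhance(patch, wavelet='haar', level=2):
    """Add wavelet-packet edge energy back onto an 8-bit patch.
    Wavelet and level are illustrative choices, not the thesis' settings."""
    wp = pywt.WaveletPacket2D(patch.astype(float), wavelet, maxlevel=level)
    # zero the pure approximation node so only detail (edge) energy remains
    for node in wp.get_level(level):
        if node.path == 'a' * level:
            node.data = np.zeros_like(node.data)
    edges = wp.reconstruct(update=False)[:patch.shape[0], :patch.shape[1]]
    edges = np.abs(edges)
    if edges.max() > 0:
        edges = edges / edges.max() * 255.0
    # pixel addition, clipped back into the 8-bit range
    return np.clip(patch.astype(float) + edges, 0, 255).astype(np.uint8)
```

The clipping step mirrors standard pixel-addition practice: without it, strong edges would wrap around the 8-bit range and corrupt the texture samples handed to the Gabor kernels.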

In Chapter 4, a series of experiments was carried out to investigate and validate the wavelet packet texture descriptors outlined in Chapter 3. Segmentation tests were performed on sets of both real-world and medical images, with a heavier focus placed on the latter due to this thesis' emphasis on improving the quality of histopathological image segmentations. The results of these experiments confirmed that boundary enhancement of images by means of wavelet packets can have a significant impact on the quality and accuracy of a 2-class texture segmentation problem, with interesting improvements made in comparison to both the ACM and GARM. Having established that the proposed method had a positive effect on snake, boundary and region-based segmentation quality, the next step was to estimate the amount of improvement offered over existing methods. For this purpose, ground truth images based on the average of three externally hand-segmented sources were treated as a series of deformed points.

Results from the ACM, GARM and WB-GARM were then evaluated based on the quantity of these points through which their segmented foreground boundaries passed successfully. This test concluded that the proposed wavelet-based texture descriptors resulted in an improved segmentation in each of the tests presented when compared to the other two methods being evaluated. A relationship between the contrast of histopathological images and segmentation quality was also established and, in some instances, contrast adjustment was shown to provide further improvements in segmentation quality.

5.2 Conclusions

This thesis has investigated the effects of multi-scale wavelet packet sub-bands with boundary enhancement on the quality of a texture segmentation using an existing model, the Geodesic Active Region Model. Both of these factors combined have been shown to positively affect the output of a texture segmentation algorithm, such that a significant increase in segmentation accuracy may be observed in many cases. To the author's knowledge, these points have not previously been explicitly addressed as part of a combined boundary enhancement routine in texture image segmentation.

As part of a supervised segmentation problem, the WB-GARM was demonstrated, in both normal and contrast-adjusted cases, to have improved the accuracy of texture segmentation when compared to the output generated by existing models and texture descriptors applied to the same image. This was shown to be true for a variety of images, and it was concluded that wavelet packet texture descriptors could offer a computationally inexpensive means of improving segmentation results in contour-based segmentation models. As with the GARM, the WB-GARM does not perform well with source images of very poor quality; however, as mentioned, it is capable of producing segmentations of a higher degree of accuracy with images of respectable quality.

A texture descriptor has been modified to take into account additional hidden boundary and edge
information from a source image through wavelet packet sub-bands. Through empirical
observations, this has been used to create a segmentation enhancement routine which may be
used as a pre-processing step. The application of this texture descriptor to textured images of
varying size and type reduced the segmentation error associated with using the conventional
segmentation model. In addition, the relative increase in segmentation quality suggests that this method has the potential to improve image segmentation across a variety of applications, including the glandular segmentation of colon biopsy images and object selection in laser-guided surgery.

Future work in this area could see the introduction of an algorithm for intelligently selecting the
most descriptive wavelet packet sub-bands for use as part of the proposed texture descriptor -
this could include the best basis if a cost function was supplied for a particular application or image type. The current selection method uses a fixed set of sub-bands from different scales for
each segmentation, however, narrowing this down to only those sub-bands which provide the
most useful edge and boundary information could offer further improvements in segmentation
quality with textured images. This would allow the WB-GARM to become an even more
powerful tool for accurate texture segmentation.
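The best-basis idea referred to above is classically driven by an additive cost such as the Coifman-Wickerhauser entropy: a node is kept whenever it is cheaper than the union of its children. A minimal sketch using PyWavelets follows; the cost function and traversal are illustrative of the general technique, not a proposal of this thesis.

```python
import numpy as np
import pywt

def shannon_cost(coeffs):
    """Coifman-Wickerhauser additive (non-normalised) entropy cost."""
    p = np.asarray(coeffs, dtype=float).ravel() ** 2
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def best_basis(wp, path=''):
    """Recursively choose, for each node of a 2-D wavelet packet tree,
    either the node itself or its four children, whichever is cheaper.
    Returns (list of chosen node paths, total cost)."""
    data = wp[path].data if path else wp.data
    cost_here = shannon_cost(data)
    if len(path) >= wp.maxlevel:
        return [path], cost_here
    paths, cost_children = [], 0.0
    for q in 'ahvd':                  # the four 2-D quadrants
        sub_paths, sub_cost = best_basis(wp, path + q)
        paths += sub_paths
        cost_children += sub_cost
    if cost_children < cost_here:
        return paths, cost_children
    return [path], cost_here
```

Supplying a different additive cost, for example log-energy or a class-separability measure tuned to a segmentation task, changes which sub-bands survive, which is exactly the application-specific selection suggested above.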

Bibliography

1.K. Tingelhoff, K.E.G. Eichjorn -"Analysis of manual segmentation in paranasal CT


images", In European Archives of Oto-Rhino-Laryngology. Springer, Berlin. Volume
265, Pages 1061-1070. September. 2008.
2.Liapis, S., Sifakis, E. and Tziritas, G - "Colour and texture segmentation using wavelet
frame analysis, deterministic relaxation and fast marching algorithms" In Journal of
Visual Communication and Image Representation, Volume 15, Pages 1-26, 2004.
3.Y. Sinal Akgul -"Image Guided Surgery", Video/Image Modelling and Synthesis Lab,
Dept. Computer Science and Information Sciences, Delaware, Newark, Delaware 19716.
4.R.S. Brindle, "Serial Composition", Oxford University Press, 1966.
5.R. Azencott, J. Wang, L. Younes - "Texture Classification using Windowed Fourier
Filters", In IEEE Transactions on Pattern Analysis and Machine Intelligence archive,
Volume 19, Issue 2, Pages 148-153, 1997.
6.J. O. Smith III - "Mathematics of the Discrete Fourier Transform (DFT)", W3K Publishing,
2007.
7.A.Jenoubi, A. Jianchao Zeng, Chouikha, M.F - “Top-down approach to segmentation of
prostate boundaries in ultrasound images ”. In Proceedings of the 33rd Applied Imagery
Pattern Recognition Workshop. Pages 13-15. Oct. 2004
8.L. Li, D. Chen, S. Lakare, K. Kreeger, I. Bitter, A. Kaufman, M. R. Wax, P. M. Djuric, Z.
Liang - " An Image Segmentation Approach to Extract Colon Lumen through Colonic
Material Tagging and Hidden Markov Random Field Model for Virtual Colonoscopy ,"
In SPIE 2002 Symposium on Medical Imaging, San Diego, CA, USA, February 2002
9.A.Todman, R. Naguib, M. Bennen - “Visual characteristics of colon images.” IEEE
CCECE, 2001.
10. K.Brady, I.Jermyn, J.Zerubia - "Adaptive probabilistic models of wavelet packets for the
analysis and segmentation of textured remote sensing images", British Machine Vision
Conference, 2003
11. Y.Meyer, R.Coifman - "Wavelets", In Cambridge Studies in Advanced Mathematics,
Volume 48, Cambridge University Press, Cambridge, 1997.
12. V. Caselles, F. Catte, T. Coll, et al - "A Geometric Model for Active Contours in Image
Processing", In Numerische Mathematik, Pages: 1-31, October 1993.
13. Rafael C. Gonzalez, "Digital Image Processing Using Matlab", Prentice Hall, 12 Feb
2004.
14. G.Mu, H.Zhai, S. Zhang - "Non-linearly weighted fuzzy correlation for colour-image
retrieval", In Chinese Optics Letters, Volume 1, Issue 10, Pages: 583-584. 2003.
15. Pichumani, R - “Model-based vision”, URL:
http://www.visionbib.com/bibliography/match597.html#TT45493, CV-Online, 1997.
16. N.R, S. Marshall. - "The use of genetic algorithms in morphological filter design". In
Signal Processing: Image Communication, Volume 8, Pages: 55-71, Jan 1996.
17. S.C.Zhu - "Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for
Multiband Image Segmentation", In IEEE Transactions on Pattern Analysis and Machine
Intelligence, Volume 18, Pages: 884-900, 1996
18. E. Wesley, W. E. Snyder, H. Qi - "Machine Vision", Cambridge University Press,
Cambridge, UK, 2004.

19. H. Choi, R. Baraniuk - "Analysis of wavelet-domain Wiener filters", In Proceedings of
IEEE--SP International Symposium on Time-frequency and Time-scale Analysis, Pages:
613-616, October 1998.
20. J. Chen, T. N. Pappas, A. Mojsilovic, B. E. Rogowitz - "Adaptive perceptual colour-
texture image segmentation," In IEEE Trans. Image Processing, Volume 14, Pages:
1524-1536, Oct. 2005.
21. F. W. Campbell, J. G. Robson - "Application of Fourier analysis to the visibility of
gratings", Journal of Physiology (London), Volume 197, Pages 551-566, 1968.
22. S.J. Osher , J. A. Sethian - "Fronts propagating with curvature dependent speed:
algorithms based on Hamilton-Jacobi formulations", In Journal of Computational
Physics, Pages: 12-49, 1988
23. Gonzalez, R. C.Woods - "Digital Image Processing, First Edition" Addison-Wesley,
1992.
24. Y.Mallet, D. Coomans, J. Kautsky, O. De vel - "Classification using Adaptive Wavelets
for Feature Extraction", In IEEE Transactions on Pattern Analysis and Machine
Intelligence, Volume 19, No. 10, Pages: 1058-1066, October 1997.
25. L.Zhang, D. Zhang - “Characterization of palmprints by wavelet signatures via
directional context modeling", In IEEE Transactions on System, Man and Cybernetic,
Part B. Volume 34, Pages: 1335-1347, June, 2004.
26. A.H Bhalerao, R. Wilson – "Unsupervised image segmentation combining region and
boundary estimation", In IVC, Volume 19, Number 6, Pages: 353-368, April 2001.
27. Qian, Y.T, Zhao, R.C - "Image Segmentation Based on Combination of the Global and
Local Information", In ICIP97, Volume 1, Pages: 204-207, 1997.
28. W.A Perkins - "Region segmentation of Images by Expansion and Contraction of Edge
points", In IJCAI79, Pages: 699-701, 1979.
29. A.Gouze, C. De Roover, A. Herbulot, E. Debreuve, B. Macq and M. Barlaud -
"Watershed-driven active contours for moving object segmentation". In IEEE
International Conference on Image Processing, September 2005.
30. T.Abe, Y.Matsuzawa - "Region Extraction with Multiple Active Contour Models". In
Workshop on Digital and Computational Video. Pages 56-63. 2001
31. R. J. Lapeer, A. C. Tan, R. Aldridge - "Active Watersheds: Combining 3D Watershed
Segmentation and Active Contours to Extract Abdominal Organs from MR Images". In
MICCAI. Pages: 596-603. 2002.
32. M. Kass, A. Witkin, D. Terzopoulos - "Snakes: active contour models". In First
International Conference on Computer Vision, Pages 259-268, 1987.
33. S. J. Osher , J. A. Sethian - "Fronts propagating with curvature dependent speed:
algorithms based on Hamilton-Jacobi formulations", In Journal of Computational
Physics, Pages: 12-49, 1988.
34. L.D Cohen, R.Kimmel - “Global Minimum for Active Contour Models: A Minimal Path
Approach”, International Journal of Computer Vision.", 1997.
35. V. Caselles, F. Catte, T. Coll, et al - "A Geometric Model for Active Contours in Image
Processing", In Numerische Mathematik, Pages: 1-31, October 1993.
36. R. Malladi, J. Sethian, B. Vemuri - "Shape modeling with front propagation: a level set
approach", In IEEE Trans. Pattern Anal. Machine Intelligence, Volume 17, Pages: 158-
175. 1995.
37. G.A. Giraldi, L.M Gonçalves, A.F Oliveira - "Dual topologically adaptable snakes", In
Proceedings of the Fifth Joint Conference on In-formation Sciences (JCIS'2000, Vol. 2),
Third In-ternational Conference on Computer Vision, Pattern Recognition, and Image
Processing, Pages: 103-106, February 2000.

38. H. Delingette, J. Montagnat - "Shape and topology constraints on parametric active
contours", Computer Vision and Image Understanding, Volume. 83, Pages 140-171,
2001.
39. J.S Suri, A.Farag – “Deformable Models: Theory and Biomedical Applications”,
Pages:35-38, Springer, 2007.
40. M. Kim , J. Choi , D. Kim - "A VOP generation tool: automatic segmentation of moving
objects in image sequences based on spatio-temporal information. Circuits and Systems
for Video Technology", In IEEET. Volume 8, December 1999.
41. J.A Sethian, Hogea.- "Computational Modelling of Solid Tumor Evolution via a General
Cartesian Mesh/level set method", In Fluid Dynamics & Materials Processing, Volume.
1, February 2005.
42. G. Aubert , L. Blanc Féraud -"Some remarks on the equivalence between classical snakes
and geodesic active contours", In International Journal of Computer Vision, Volume 34,
1999.
43. P. Kornprobst, R.Deriche, G.Aubert - "Image Sequence Restoration: A PDE Based
Coupled Method for Image Restoration and Motion Segmentation". In Proceedings of the
5th European Conference on Computer Vision, Volume 2, Pages: 548-562, 1998.
44. S.Kichenassamy, A.Kumar, A J. Yezzi - "Gradient Flows and Geometric Active Contour
Models", In Proceedings of the Fifth International Conference on Computer Vision,
Pages: 810-815, 1995.
45. N. Paragios, O. Mellina-Gottardo, and V. Ramesh -"Gradient vector flow fast geometric
active contours". In IEEE Trans. Pattern Anal. Machine Intelligence, Volume 1, Pages:
67-73, 2004.
46. T. Chan and L. Vese - “An active contour model without edges,” In International
Conference on Scale-Space Theories in Computer Vision, Pages 141-151, 1999.
47. D. Mumford, J. Shah- "Boundary detection by minimizing functionals". In Proceedings
of IEEE ICASSP, Pages: 22-26, 1985.
48. A. Chakraborty, L. H. Staib, J. S. Duncan - "An Integrated Approach to Boundary
Finding in Medical Images", In IEEE Workshop on Biomedical Image Analysis, Pages:
13-22, 1994.
49. N. Paragios, R. Deriche - "Geodesic Active regions for Supervised Texture
Segmentation". In IEEE International Conference on Computer Vision (ICCV), Pages:
926-932, Corfu, Greece, 1999.
50. N. Paragios, R. Deriche - "Geodesic Active Regions: A New Framework to Deal with
Frame Partition Problems in Computer Vision", In Journal of Visual Communication and
Image Representation 13, Pages: 249- 268, 2002
51. T.Brox, M.Rousson, R. Deriche, J.Weickert - "Unsupervised segmentation incorporating
colour, texture, and motion" - In Computer Analysis of Images and Patterns, volume
2756 of Lecture Notes in Computer Science, 2003.
52. D. Cremers, T. Kohlberger, C. Schnorr - "Nonlinear Shape Statistics in Mumford-Shah
Based Segmentation". In European Conference on Computer Vision, Volume 2, Pages:
93-108, Copenhagen, Denmark, June 2002.
53. T. Brox, J. Weickert - "Level Set Segmentation with multiple regions". In IEEE
Transactions on Image Processing, Volume 15, Pages 3213-3218, 2006.
54. A. Cohen, I. Daubechies - "Non-separable bidimensional wavelet bases," In Revista
Matematica Iberoamericana, Volume 9, Pages: 51-137, 1993.
55. J. S. Duncan, N. Ayache - "Medical Image Analysis: Progress over Two Decades and the
Challenges Ahead," In IEEE Transactions on Pattern Analysis and Machine Intelligence,
Volume 22, Pages: 85-106, January 2000

56. C.P Lee, W.E. Snyder, C.Wang -"Supervised Multispectral Image Segmentation using
Active Contours", In International Conference on Automation and Robotics, Barcelona,
Pages: 4242- 4247, April, 2005.
57. W.E.L. Grimson, G.J. Ettinger, T. Kapur, M.E. Leventon, W.M. Wells III, R. Kikinis -
"Utilizing Segmented MRI Data in Image-Guided Surgery." In International Journal of
Pattern Recognition and Artificial Intelligence, Pages: 1367-1397, July 9th, 1996.
58. Gonzales, Woods. “Digital Image Processing, Second Edition”, Prentice Hall, January,
2002.
59. Imatest measurements: Image quality factors, Imatest, – URL:
(http://www.imatest.com/docs/iqf.html)
60. J.Sachs, “Sharpening filters”. Digital Light and Colour, 1999.
61. E. Mach -“Mach Bands – On the effect of the spatial distribution of the light stimulus on
the retina” Pages 253-71. 1865.
62. W.K. Pratt, J.Wiley & Sons, Digital Image Processing (second edition), Addison-
Wesley, New York, USA, 1991.
63. N. Paragios, R. Deriche - "Geodesic Active Regions: A New Framework to Deal with
Frame Partition Problems in Computer Vision", In Journal of Visual Communication and
Image Representation 13, Pages: 249- 268, 2002
64. W. E. Higgs, D.F. Dunn - “Gabor filter design for multiple texture segmentation”, In
Journal of Optical Engineering, Volume 35, Pages: 2852-2863,1996.
65. R.C. Staunton, M. Li - “Unsupervised Texture Segmentation Based on Immune Genetic
Algorithms and Fuzzy Clustering”, 8th International Conference on Signal Processing, In
IEEE ICSP'06, Volume 2, Pages: 957-961, November 2006.
66. R.R Coifman, Y. Meyer, M.V. Wickerhauser -"Wavelet analysis and signal processing,",
In Wavelets and their Applications, Pages: 153-178, 1992.
67. N. Paragios, R. Deriche - “Geodesic Active Contours for Supervised Texture
Segmentation”. In IEEE Computer Society Conference Computer Vision and Pattern
Recognition, Volume 2, 1999.
68. Ian Kaplan -"Applying the Haar Wavelet Transform to Time Series Information" - URL:
http://www.bearcave.com/misl/misl_tech/wavelets/haar.html, Bearcave, February 2004.
69. F.H. Elfouly, M.I. Mahmoud, M.I. M. Dessouky, and Salah Deya - “Comparison between
Haar and Daubechies Wavelet Transformations on FPGA Technology” In International
Journal of Computer, Information, and Systems Science, and Engineering, Winter 2008.
70. D.F.Walnut - "An Introduction to Wavelet Analysis: With 88 Figures", Birkhäuser
Boston, January 2004.
71. A. Jense, A. La Cour-Harbo - “Ripples in Mathematics: The Discrete Wavelet
Transform”, Springer, 2001.
72. A.Linderhead, S. Sjokvist, S.Nyberg, M.Uppsall, C. Gronwall, P.Andersson, D. Letalick
- “Temporal analysis for land mine detection “. In Proceedings of the 4th International
Symposium on Image and Signal Processing and Analysis (ISPA 2005), Pages: 389- 394.
September 2005.
73. L. Debnath - "Wavelet Transforms and their applications", Birkhauser Boston. November
2001.
74. C. T. Leondes -“Medical Imaging Systems Technology / Analysis And Computational
Methods (V. 1)”. World Scientific Publishing Co Pte Ltd, 2005
75. J. Ivins, J. Porrill - “Active Region Models For Segmenting Medical Images”, First IEEE
International Conference on Image Processing (ICIP'94; Austin, Texas): Volume 2,
Pages: 227-231. November 1994.

76. A. Chakraborty, L. Staib, and J. Duncan - “Deformable Boundary Finding in Medical
Images by Integrating Gradient and Region In-formation,” In IEEE Trans. in Medical
Imaging, Volume 15, no. 6, Pages 859–870, Dec. 1996.
77. R. Ronfard - “Region-Based Strategies for Active Contour Models,”. In International
Journal of Computer Vision, Volume. 2, Pages 229-251. January 1994.
78. M. Hernandez, A. Frang - “Non-parametric geodesic active regions: method and
evaluation for cerebral aneurysms segmentation in 3DRA and CTA",
Spain Computational Imaging Lab, Department of Technology, Pompeu Fabra
University, Barcelona, 2007.
79. O.Ecabert, J.Thiran - "Variational Image Segmentation by Unifying Region and
Boundary Information," In 16th International Conference on Pattern Recognition
(ICPR'02), Volume 2, Pages: 885-888, 2002.
80. J. Bigun, G. H. Granlund, J. Wiklund. "Multidimensional orientation estimation with
applications to texture analysis and optical flow", In IEEE Transactions on Pattern
Analysis and Machine Intelligence, Pages: 775–790, August 1991.
81. T. Brox, M. Rousson, R. Deriche, J. Weickert - “Unsupervised segmentation
incorporating colour, texture, and motion". INRIA Technical Report 4760, March
2003.
82. D. Xiano, O. Jun -“Study of Colour Image Enhancement Based on Wavelet Analysis”, In
IPSJ Technical Reports, Volume 93. Pages: 35-40. 2006.
83. P. Sakellaropoulos, L.Costaridou, G. Panayiotakis - “A Wavelet-based Spatially
Adaptive Method for Mammographic Contrast Enhancement.” In Physics in Medicine
and Biology. Volume 48, Pages 787-803, 2003.
84. A. A. Michelson - “Studies in Optics”, University of Chicago Press, Chicago,
Illinois, USA, 1927
85. A. Marion, “An Introduction to Image Processing”, Chapman and Hall, Pages 242 – 244,
1991.
86. R.Fisher, S. Perkins, A. Walker and E. Wolfart, "Pixel Addition" – URL:
http://homepages.inf.ed.ac.uk/rbf/HIPR2/pixadd.htm, Department of Informatics, The
University of Edinburgh, 2003.
87. C.Braccini, L.De Floriani, G.Vernazza - "Image Analysis and Processing", In 8th
International Conference (ICIAP '95), Springer, San Remo, Italy, September 1995.
88. D.A. Burkhardt and P. K. Fahey -“Contrast Enhancement and Distributed Encoding by
Bipolar Cells in the Retina", In The Journal of Neurophysiology, Volume 80, Pages
1070-1081, September 1998.
89. J.Becker - "Scanning Tunneling Microscope Computer Automation", In Surface Science,
Pages 200-209, 1987.
90. R.Owens - “Spatial Domain methods”.
URL:http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT5/no
de3.html , University of Edinburgh, School of Informatics. 1997.
91. S.Pizer, J. Austin, R. Cromartie - “Adaptive Histogram Equalization and its Variations”.
In Computer Vision, Graphics and Image Processing. Volume 39. Number 3 – Page:
355-368. 1987.
92. S.Wook, K. Lee, Y. Bae, K. Min, M. Kwon.- "Genetic classification of colorectal cancer
based on chromosomal loss and microsatellite instability predicts survival.", Clinical
Cancer Research, Seoul 137-701, Korea. (7):2311-22, 2002
93. K.Masood, N.M Rajpoot - “Classification of Colon Biopsy Samples by Spatial Analysis
of a Single Spectral Band from its Hyperspectral Cube”. In Proceedings Medical Image
Understanding and Analysis 2007.

94. K.Masood, N.Rajpoot, H.Qureshi, N.Rajpoot -"Hyperspectral Texture Analysis for Colon
Tissue Biopsy Classification", In International Symposium on Health Informatics and
Bioinformatics, Turkey 2007.
95. M.Sato, S. Lakare, M. Wan, A. Kaufman, Z. Liang, M. Wax - "An automatic colon
segmentation for 3D virtual colonoscopy". In IEICE Trans. Information and Systems,
Volume. E84-D, No. 1, Pages 201-208, 2001
96. A.Toddman, Naguib, M.K. Bennett, M.K - “Visual characterisation of colon images”, In
Proceedings of Medical Image Understanding and Analysis (MIUA 2001), Pages 161-
164. 2001.
97. R.L. Koretz - “Malignant polyps: are they sheep in wolves' clothing?”. In Ann Intern
Med. 118:63-8, 1993.
98. J.S Mandel, J.H Bond, T.R Church, D.C Snover, G.M Bradley, L.M Schuman, F Ederer -
"Reducing mortality from colorectal cancer by screening for fecal occult blood.
Minnesota Colon Cancer Control Study." - In N Engl J Med, Volume 328, Issue 19,
Pages 1365-71. May 1993.
99. L.Li, D. Chen, S. Lakare, K. Kreeger, I. Bitter, A. Kaufman, M. R. Wax, P. M. Djuric,
and Z. Liang - "An Image Segmentation Approach to Extract Colon Lumen through
Colonic Material Tagging and Hidden Markov Random Field Model for Virtual
Colonoscopy ," In SPIE 2002 Symposium on Medical Imaging, San Diego, CA, February
2002.
100. L. Hong, A. Kaufman, T. Wei,M. Wax - "3D Virtual Colonoscopy", In Proc. Symposium
on Biomedical Visualization. Pages 26-32. 1995.
101. L. Chen, N. Tokuda and A. Nagai - “Robustness of Regional Matching Over Global
Matching -Experiments and Applications to Eigenface-Based Face Recognition”, :
Proceedings of 2001 International Conference on Intelligent Multimedia and Distance
Education, Pages 38-47, John Wiley & Sons, Inc, 2001.
102. A. Todman, R. Naguib, M. Bennen - “Visual characteristics of colon images.” IEEE
CCECE, 2001.
103. K. Chen, D. Wang, X. Liu - “Weight adaptation and oscillatory correlation for image
segmentation”, In IEEE Transactions on Neural Networks, Volume 11, Issue 5, Pages: 1106
-1123, September 2000.
104. L.P. Clarke, R.P. Velthuizen,M.A. Camacho, J.J. Heine- "MRI segmentation: methods
and applications.", In Magnetic. Resonance Imaging,Volume: 13, Issue: 3, Pages 343-36,
1995
105. R.M Summers, J.Yao, P.J Pickhardt, M.Franaszek, I.Bitter, D.Brickman, V.Krishna, J.R
Choi - "Computed tomographic virtual colonoscopy computer-aided polyp detection in a
screening population". In Gastroenterology. Pages 1832-1844. December 2005.
106. I.Bitter, R. Van Uitert, R.Summers - “Detection of Colon Wall Outer Boundary and
Segmentation of the Colon Wall Based on Level Set Methods”, International Conference of
the IEEE Engineering in Medicine and Biology Society. 2006.
107. Cao, Tavanapong, Li - “Parsing and browsing tools for colonoscopy videos”. In
Proceedings of the 12th annual ACM international conference on Multimedia New York,
NY, USA. Pages: 844 – 851. 2004
108. V. Raad, A. Bradley - "Active Contour Model Based Segmentation of Colposcopy
Images of Cervix Uteri Using Gaussian Pyramids". In 6th International Symposium on
Digital Signal Processing for Communication Systems (DSPCS'02), Sydney, Australia.
January, 2002
